Jan 27 15:49:36 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 27 15:49:36 crc restorecon[4693]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:49:36 crc restorecon[4693]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:49:36 crc restorecon[4693]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 15:49:36 crc restorecon[4693]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 15:49:36 crc restorecon[4693]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:49:36 crc restorecon[4693]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:49:36 crc restorecon[4693]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 15:49:36 crc restorecon[4693]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:49:36 crc restorecon[4693]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 15:49:36 crc restorecon[4693]: 
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:36 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 
15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 15:49:37 crc 
restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 
15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 
15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc 
restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:49:37 crc restorecon[4693]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 15:49:37 crc restorecon[4693]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 15:49:37 crc restorecon[4693]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 27 15:49:38 crc kubenswrapper[4767]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 15:49:38 crc kubenswrapper[4767]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 27 15:49:38 crc kubenswrapper[4767]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 15:49:38 crc kubenswrapper[4767]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
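Everything from the top of this section down to the first kubenswrapper lines above is a single restorecon pass over /var/lib/kubelet. The repeated "not reset as customized by admin" message means restorecon treated the file's current context as a local customization and left it in place; container_file_t is listed among the SELinux customizable types, so only a forced relabel (restorecon -F) would rewrite these labels. Below is a minimal sketch for summarizing such a pass, assuming the journal has first been saved to a local file; the filename and the journalctl command in the comment are placeholders, not taken from this log:

```python
import re
from collections import Counter

# Matches the restorecon messages above: a path that was left alone
# because its current context counts as an admin customization.
SKIP_RE = re.compile(
    r"restorecon\[\d+\]: (?P<path>/\S+) not reset as customized by admin "
    r"to (?P<context>system_u:object_r:\w+:s0(?::c\d+,c\d+)?)"
)

def summarize(journal_text: str) -> Counter:
    """Tally skipped paths per target SELinux context."""
    return Counter(m.group("context") for m in SKIP_RE.finditer(journal_text))

if __name__ == "__main__":
    # Placeholder input, e.g. from: journalctl -u kubelet --no-pager > kubelet-start.log
    with open("kubelet-start.log") as f:
        for context, count in summarize(f.read()).most_common():
            print(f"{count:6d}  {context}")
```

Grouped this way, the pass above is dominated by a handful of MCS category pairs (c7,c13 for the catalog pods, c682,c947 for the oauth pod, and so on), which is a quick way to tell which pod a given run of messages belongs to.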
Jan 27 15:49:38 crc kubenswrapper[4767]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 27 15:49:38 crc kubenswrapper[4767]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.088833 4767 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094382 4767 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094411 4767 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094417 4767 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094422 4767 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094430 4767 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094434 4767 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094438 4767 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094442 4767 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094446 4767 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094449 4767 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094452 4767 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094457 4767 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094462 4767 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094466 4767 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094471 4767 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094475 4767 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094478 4767 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094483 4767 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094487 4767 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094491 4767 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. 
It will be removed in a future release. Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094496 4767 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094500 4767 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094503 4767 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094507 4767 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094510 4767 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094514 4767 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094517 4767 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094520 4767 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094524 4767 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094528 4767 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094532 4767 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094535 4767 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094539 4767 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094542 4767 feature_gate.go:330] unrecognized feature gate: Example Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094545 4767 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094550 4767 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094554 4767 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094558 4767 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094562 4767 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094565 4767 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094568 4767 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094572 4767 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094576 4767 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094580 4767 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094583 4767 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094586 4767 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094590 4767 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094593 4767 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094597 4767 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094600 4767 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094603 4767 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094608 4767 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
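The long runs of feature_gate.go warnings above and below come from the cluster-wide feature-gate set being handed to the kubelet: names this kubelet binary does not register (GatewayAPI, NewOLM, and so on) are logged at feature_gate.go:330 and skipped, while known gates that are GA or deprecated get the :353 and :351 messages; the resolved map printed at feature_gate.go:386 further down shows startup proceeds regardless. The following Go sketch is a deliberately simplified model of that message selection, not the real component-base implementation:

    // gates.go (illustrative model only): reproduce the three message
    // shapes seen at feature_gate.go:330/:351/:353.
    package main

    import "fmt"

    type stability int

    const (
        alpha stability = iota
        beta
        ga
        deprecated
    )

    // known stands in for the gate registry compiled into this kubelet
    // build; the real registry is much larger.
    var known = map[string]stability{
        "CloudDualStackNodeIPs":                  ga,
        "DisableKubeletCloudCredentialProviders": ga,
        "ValidatingAdmissionPolicy":              ga,
        "KMSv1":                                  deprecated,
        "DynamicResourceAllocation":              alpha,
    }

    func set(requested map[string]bool) {
        for name, val := range requested { // map order is random, like the log's varying order
            st, ok := known[name]
            switch {
            case !ok:
                fmt.Printf("unrecognized feature gate: %s\n", name)
            case st == ga:
                fmt.Printf("Setting GA feature gate %s=%t. It will be removed in a future release.\n", name, val)
            case st == deprecated:
                fmt.Printf("Setting deprecated feature gate %s=%t. It will be removed in a future release.\n", name, val)
            default:
                // alpha and beta gates are set silently
            }
        }
    }

    func main() {
        set(map[string]bool{
            "GatewayAPI":                true,  // unknown here -> "unrecognized"
            "ValidatingAdmissionPolicy": true,  // GA -> :353-style message
            "KMSv1":                     true,  // deprecated -> :351-style message
            "DynamicResourceAllocation": false, // known alpha -> no output
        })
    }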
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094612 4767 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094615 4767 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094619 4767 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094622 4767 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094625 4767 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094629 4767 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094632 4767 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094635 4767 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094640 4767 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094644 4767 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094647 4767 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094650 4767 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094654 4767 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094658 4767 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094661 4767 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094664 4767 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094667 4767 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094671 4767 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.094674 4767 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094759 4767 flags.go:64] FLAG: --address="0.0.0.0" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094769 4767 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094778 4767 flags.go:64] FLAG: --anonymous-auth="true" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094784 4767 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094789 4767 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094794 4767 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094799 4767 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094804 4767 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 27 
15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094808 4767 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094812 4767 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094816 4767 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094821 4767 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094825 4767 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094829 4767 flags.go:64] FLAG: --cgroup-root="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094833 4767 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094837 4767 flags.go:64] FLAG: --client-ca-file="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094841 4767 flags.go:64] FLAG: --cloud-config="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094845 4767 flags.go:64] FLAG: --cloud-provider="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094849 4767 flags.go:64] FLAG: --cluster-dns="[]" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094854 4767 flags.go:64] FLAG: --cluster-domain="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094858 4767 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094866 4767 flags.go:64] FLAG: --config-dir="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094870 4767 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094874 4767 flags.go:64] FLAG: --container-log-max-files="5" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094885 4767 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094889 4767 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094893 4767 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094899 4767 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094903 4767 flags.go:64] FLAG: --contention-profiling="false" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094908 4767 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094913 4767 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094918 4767 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094922 4767 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094928 4767 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094942 4767 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094948 4767 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094955 4767 flags.go:64] FLAG: --enable-load-reader="false" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094960 4767 flags.go:64] FLAG: 
--enable-server="true" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094964 4767 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094971 4767 flags.go:64] FLAG: --event-burst="100" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094976 4767 flags.go:64] FLAG: --event-qps="50" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094981 4767 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094985 4767 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094990 4767 flags.go:64] FLAG: --eviction-hard="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.094996 4767 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095001 4767 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095006 4767 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095011 4767 flags.go:64] FLAG: --eviction-soft="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095015 4767 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095020 4767 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095025 4767 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095030 4767 flags.go:64] FLAG: --experimental-mounter-path="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095035 4767 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095042 4767 flags.go:64] FLAG: --fail-swap-on="true" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095047 4767 flags.go:64] FLAG: --feature-gates="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095052 4767 flags.go:64] FLAG: --file-check-frequency="20s" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095057 4767 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095062 4767 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095067 4767 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095072 4767 flags.go:64] FLAG: --healthz-port="10248" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095077 4767 flags.go:64] FLAG: --help="false" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095081 4767 flags.go:64] FLAG: --hostname-override="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095085 4767 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095089 4767 flags.go:64] FLAG: --http-check-frequency="20s" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095093 4767 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095098 4767 flags.go:64] FLAG: --image-credential-provider-config="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095103 4767 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095109 4767 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 27 
15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095114 4767 flags.go:64] FLAG: --image-service-endpoint="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095118 4767 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095123 4767 flags.go:64] FLAG: --kube-api-burst="100" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095128 4767 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095140 4767 flags.go:64] FLAG: --kube-api-qps="50" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095145 4767 flags.go:64] FLAG: --kube-reserved="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095150 4767 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095154 4767 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095159 4767 flags.go:64] FLAG: --kubelet-cgroups="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095164 4767 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095169 4767 flags.go:64] FLAG: --lock-file="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095173 4767 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095179 4767 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095183 4767 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095193 4767 flags.go:64] FLAG: --log-json-split-stream="false" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095213 4767 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095217 4767 flags.go:64] FLAG: --log-text-split-stream="false" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095224 4767 flags.go:64] FLAG: --logging-format="text" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095228 4767 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095232 4767 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095236 4767 flags.go:64] FLAG: --manifest-url="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095240 4767 flags.go:64] FLAG: --manifest-url-header="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095246 4767 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095250 4767 flags.go:64] FLAG: --max-open-files="1000000" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095255 4767 flags.go:64] FLAG: --max-pods="110" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095260 4767 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095264 4767 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095268 4767 flags.go:64] FLAG: --memory-manager-policy="None" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095272 4767 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095276 4767 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 27 
15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095280 4767 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095284 4767 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095294 4767 flags.go:64] FLAG: --node-status-max-images="50" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095298 4767 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095302 4767 flags.go:64] FLAG: --oom-score-adj="-999" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095306 4767 flags.go:64] FLAG: --pod-cidr="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095310 4767 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095317 4767 flags.go:64] FLAG: --pod-manifest-path="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095320 4767 flags.go:64] FLAG: --pod-max-pids="-1" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095325 4767 flags.go:64] FLAG: --pods-per-core="0" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095329 4767 flags.go:64] FLAG: --port="10250" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095333 4767 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095337 4767 flags.go:64] FLAG: --provider-id="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095341 4767 flags.go:64] FLAG: --qos-reserved="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095345 4767 flags.go:64] FLAG: --read-only-port="10255" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095350 4767 flags.go:64] FLAG: --register-node="true" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095355 4767 flags.go:64] FLAG: --register-schedulable="true" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095359 4767 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095368 4767 flags.go:64] FLAG: --registry-burst="10" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095373 4767 flags.go:64] FLAG: --registry-qps="5" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095379 4767 flags.go:64] FLAG: --reserved-cpus="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095383 4767 flags.go:64] FLAG: --reserved-memory="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095388 4767 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095392 4767 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095397 4767 flags.go:64] FLAG: --rotate-certificates="false" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095401 4767 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095405 4767 flags.go:64] FLAG: --runonce="false" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095409 4767 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095413 4767 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 27 15:49:38 crc 
kubenswrapper[4767]: I0127 15:49:38.095417 4767 flags.go:64] FLAG: --seccomp-default="false" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095421 4767 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095425 4767 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095429 4767 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095434 4767 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095438 4767 flags.go:64] FLAG: --storage-driver-password="root" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095442 4767 flags.go:64] FLAG: --storage-driver-secure="false" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095446 4767 flags.go:64] FLAG: --storage-driver-table="stats" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095450 4767 flags.go:64] FLAG: --storage-driver-user="root" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095455 4767 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095459 4767 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095463 4767 flags.go:64] FLAG: --system-cgroups="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095467 4767 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095474 4767 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095478 4767 flags.go:64] FLAG: --tls-cert-file="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095481 4767 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095486 4767 flags.go:64] FLAG: --tls-min-version="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095492 4767 flags.go:64] FLAG: --tls-private-key-file="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095496 4767 flags.go:64] FLAG: --topology-manager-policy="none" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095500 4767 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095504 4767 flags.go:64] FLAG: --topology-manager-scope="container" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095508 4767 flags.go:64] FLAG: --v="2" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095513 4767 flags.go:64] FLAG: --version="false" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095520 4767 flags.go:64] FLAG: --vmodule="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095524 4767 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095529 4767 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095625 4767 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095630 4767 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095634 4767 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095638 4767 feature_gate.go:330] 
unrecognized feature gate: MultiArchInstallAzure Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095642 4767 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095645 4767 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095649 4767 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095654 4767 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095658 4767 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095662 4767 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095666 4767 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095670 4767 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095673 4767 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095677 4767 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095680 4767 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095684 4767 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095687 4767 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095691 4767 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095694 4767 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095698 4767 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095701 4767 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095704 4767 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095708 4767 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095712 4767 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
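The flags.go:64 block above records the effective value of every command-line flag, one record per flag, so the whole invocation can be recovered mechanically instead of by eye. A throwaway Go sketch (hypothetical, not a kubelet tool) that rebuilds that dump into a sorted name-to-value table from a saved excerpt:

    // flagdump.go (hypothetical helper): turn the flags.go:64 "FLAG:"
    // records on stdin back into a sorted table.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "sort"
    )

    func main() {
        re := regexp.MustCompile(`FLAG: (--[\w-]+)="(.*?)"`)
        flags := map[string]string{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
        for sc.Scan() {
            for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
                flags[m[1]] = m[2]
            }
        }
        names := make([]string, 0, len(flags))
        for n := range flags {
            names = append(names, n)
        }
        sort.Strings(names)
        for _, n := range names {
            fmt.Printf("%s = %q\n", n, flags[n])
        }
    }

Worth noticing in the dump itself: --config=/etc/kubernetes/kubelet.conf is set, --feature-gates is empty (the gate map logged at feature_gate.go:386 evidently comes from the config file rather than the command line), and --v=2 accounts for the verbosity of this boot log.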
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095717 4767 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095720 4767 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095725 4767 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095729 4767 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095733 4767 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095737 4767 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095740 4767 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095743 4767 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095747 4767 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095751 4767 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095754 4767 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095757 4767 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095760 4767 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095764 4767 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095768 4767 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095771 4767 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095774 4767 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095778 4767 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095782 4767 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095786 4767 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095789 4767 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095793 4767 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095796 4767 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095799 4767 feature_gate.go:330] unrecognized feature gate: Example Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095803 4767 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095807 4767 
feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095810 4767 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095813 4767 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095817 4767 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095820 4767 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095824 4767 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095827 4767 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095830 4767 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095834 4767 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095837 4767 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095840 4767 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095844 4767 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095847 4767 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095851 4767 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095855 4767 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095859 4767 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095863 4767 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095867 4767 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095871 4767 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095876 4767 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095879 4767 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.095883 4767 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.095894 4767 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.110217 4767 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.110711 4767 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110825 4767 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110837 4767 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110841 4767 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110847 4767 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110855 4767 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
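Each configuration pass ends with a feature_gate.go:386 summary of the map the kubelet actually resolved; one appears just above, and the later passes below print the same fifteen-entry map. A hedged sketch that checks that agreement mechanically from a saved excerpt (again a hypothetical helper, not shipped tooling):

    // gatediff.go (hypothetical helper): parse every feature_gate.go:386
    // summary on stdin and verify all passes resolved to the same map.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "reflect"
        "strings"
    )

    // parse handles lines of the form: feature gates: {map[A:true B:false ...]}
    func parse(s string) map[string]bool {
        i := strings.Index(s, "{map[")
        j := strings.LastIndex(s, "]}")
        if i < 0 || j < i {
            return nil
        }
        out := map[string]bool{}
        for _, kv := range strings.Fields(s[i+len("{map[") : j]) {
            if k, v, ok := strings.Cut(kv, ":"); ok {
                out[k] = v == "true"
            }
        }
        return out
    }

    func main() {
        var dumps []map[string]bool
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
        for sc.Scan() {
            if strings.Contains(sc.Text(), "feature gates: {map[") {
                if m := parse(sc.Text()); m != nil {
                    dumps = append(dumps, m)
                }
            }
        }
        if len(dumps) == 0 {
            fmt.Println("no feature gate summaries found")
            return
        }
        for i := 1; i < len(dumps); i++ {
            if !reflect.DeepEqual(dumps[0], dumps[i]) {
                fmt.Printf("summary %d differs from summary 0\n", i)
            }
        }
        fmt.Printf("%d summaries, %d gates in the first\n", len(dumps), len(dumps[0]))
    }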
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110861 4767 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110868 4767 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110874 4767 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110879 4767 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110887 4767 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110892 4767 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110896 4767 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110901 4767 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110907 4767 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110911 4767 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110916 4767 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110920 4767 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110924 4767 feature_gate.go:330] unrecognized feature gate: Example Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110929 4767 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110933 4767 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110938 4767 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110943 4767 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110948 4767 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110952 4767 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110956 4767 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110962 4767 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110967 4767 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110974 4767 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110977 4767 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110981 4767 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110985 4767 feature_gate.go:330] 
unrecognized feature gate: BuildCSIVolumes Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110989 4767 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110994 4767 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.110999 4767 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111004 4767 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111008 4767 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111013 4767 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111020 4767 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111027 4767 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111032 4767 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111036 4767 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111041 4767 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111046 4767 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111050 4767 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111054 4767 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111057 4767 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111061 4767 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111064 4767 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111068 4767 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111075 4767 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111079 4767 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111084 4767 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111089 4767 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111092 4767 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111096 4767 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111100 4767 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111103 4767 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111107 4767 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111111 4767 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111114 4767 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111118 4767 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111122 4767 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111126 4767 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111132 4767 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111136 4767 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111140 4767 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111143 4767 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111147 4767 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111151 4767 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111155 4767 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111158 4767 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.111165 4767 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111326 4767 feature_gate.go:330] unrecognized 
feature gate: NutanixMultiSubnets Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111335 4767 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111340 4767 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111344 4767 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111348 4767 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111352 4767 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111357 4767 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111361 4767 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111365 4767 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111371 4767 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111376 4767 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111380 4767 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111384 4767 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111388 4767 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111392 4767 feature_gate.go:330] unrecognized feature gate: Example Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111396 4767 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111400 4767 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111404 4767 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111408 4767 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111412 4767 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111415 4767 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111419 4767 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111423 4767 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111427 4767 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111431 4767 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111435 4767 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111439 4767 feature_gate.go:330] unrecognized 
feature gate: EtcdBackendQuota Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111442 4767 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111446 4767 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111505 4767 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111511 4767 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111517 4767 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111522 4767 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111525 4767 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111529 4767 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111533 4767 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111537 4767 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111542 4767 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111546 4767 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111549 4767 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111552 4767 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111556 4767 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111559 4767 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111566 4767 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111571 4767 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111575 4767 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111578 4767 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111582 4767 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111586 4767 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111590 4767 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111594 4767 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111598 4767 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111602 4767 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111606 4767 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111610 4767 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111613 4767 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111617 4767 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111621 4767 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111624 4767 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111628 4767 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111631 4767 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111635 4767 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111639 4767 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111642 4767 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111645 4767 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111649 4767 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111652 4767 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111655 4767 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111660 4767 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111664 4767 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.111668 4767 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.111675 4767 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.111899 4767 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.117366 4767 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.117524 4767 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.119186 4767 server.go:997] "Starting client certificate rotation"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.119260 4767 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.120130 4767 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-22 08:09:48.790106045 +0000 UTC
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.120245 4767 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.146693 4767 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 27 15:49:38 crc kubenswrapper[4767]: E0127 15:49:38.149275 4767 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.149388 4767 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.165082 4767 log.go:25] "Validated CRI v1 runtime API"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.200506 4767 log.go:25] "Validated CRI v1 image API"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.206421 4767 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.211447 4767 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-27-15-44-19-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.211496 4767 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.235719 4767 manager.go:217] Machine: {Timestamp:2026-01-27 15:49:38.232139607 +0000 UTC m=+0.621157150 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e BootID:2cd8151d-a43c-49a6-97ea-751da1662943 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:9e:b7:3c Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:9e:b7:3c Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:a7:31:75 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:ec:45:77 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:7b:99:0e Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:10:cf:2e Speed:-1 Mtu:1496} {Name:eth10 MacAddress:6a:f6:23:99:1a:00 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:c2:dc:54:e1:c9:af Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.236646 4767 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.236992 4767 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.239274 4767 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.239783 4767 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.239944 4767 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.240462 4767 topology_manager.go:138] "Creating topology manager with none policy"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.240525 4767 container_manager_linux.go:303] "Creating device plugin manager"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.241254 4767 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.241351 4767 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.242184 4767 state_mem.go:36] "Initialized new in-memory state store"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.242370 4767 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.247794 4767 kubelet.go:418] "Attempting to sync node with API server"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.247925 4767 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.248007 4767 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.248070 4767 kubelet.go:324] "Adding apiserver pod source"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.248141 4767 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.253714 4767 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.254874 4767 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.257542 4767 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused
Jan 27 15:49:38 crc kubenswrapper[4767]: E0127 15:49:38.257720 4767 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError"
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.257533 4767 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused
Jan 27 15:49:38 crc kubenswrapper[4767]: E0127 15:49:38.257865 4767 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.257963 4767 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.260673 4767 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.260705 4767 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.260715 4767 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.260750 4767 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.260779 4767 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.260787 4767 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.260794 4767 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.260806 4767 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.260814 4767 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.260822 4767 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.260838 4767 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.260846 4767 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.262073 4767 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.262594 4767 server.go:1280] "Started kubelet"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.262857 4767 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused
Jan 27 15:49:38 crc systemd[1]: Started Kubernetes Kubelet.
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.267846 4767 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.267959 4767 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.268622 4767 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.269828 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.269888 4767 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.270006 4767 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.270019 4767 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.270359 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 00:25:22.12275499 +0000 UTC
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.271582 4767 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.272148 4767 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused
Jan 27 15:49:38 crc kubenswrapper[4767]: E0127 15:49:38.272374 4767 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError"
Jan 27 15:49:38 crc kubenswrapper[4767]: E0127 15:49:38.271609 4767 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.273542 4767 factory.go:153] Registering CRI-O factory
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.273568 4767 factory.go:221] Registration of the crio container factory successfully
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.273662 4767 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.273671 4767 factory.go:55] Registering systemd factory
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.273678 4767 factory.go:221] Registration of the systemd container factory successfully
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.273698 4767 factory.go:103] Registering Raw factory
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.273809 4767 manager.go:1196] Started watching for new ooms in manager
Jan 27 15:49:38 crc kubenswrapper[4767]: E0127 15:49:38.274582 4767 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="200ms"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.274741 4767 manager.go:319] Starting recovery of all containers
Jan 27 15:49:38 crc kubenswrapper[4767]: E0127 15:49:38.280663 4767 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.132:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188ea13af919819a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 15:49:38.262557082 +0000 UTC m=+0.651574605,LastTimestamp:2026-01-27 15:49:38.262557082 +0000 UTC m=+0.651574605,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.284495 4767 server.go:460] "Adding debug handlers to kubelet server"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286165 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286242 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286258 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286271 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286281 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286292 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286304 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286317 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286331 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286343 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286356 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286371 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286388 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286402 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286416 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286427 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286438 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286449 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286465 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286478 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286491 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286504 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286517 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286531 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286544 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286558 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286577 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286594 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286609 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286628 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286643 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286658 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286674 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286689 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286710 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286726 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286739 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286762 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286788 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286806 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286819 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286836 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286852 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286866 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286880 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286893 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286906 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286920 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286933 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286949 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286963 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286976 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.286996 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287011 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287025 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287041 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287054 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287067 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287082 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287095 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287107 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287121 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287141 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287155 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287168 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287187 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287221 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287236 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287250 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287263 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287277 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287289 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287303 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287317 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287333 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287347 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287515 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287530 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287543 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287558 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287572 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287587 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287601 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287614 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287627 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287639 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287652 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287665 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287677 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287691 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287706 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287719 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287731 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287746 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287759 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287772 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287784 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287798 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287812 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287824 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287839 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287851 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287864 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287876 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287895 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287909 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287927 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.287941 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288003 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288019 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288033 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288047 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288063 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288076 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288089 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288104 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288117 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288134 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288146 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288158 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288169 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288183 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288224 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288239 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288251 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288264 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288277 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288290 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288302 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288314 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288327 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288339 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288350 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288363 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288375 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288391 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288405 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288420 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288435 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288448 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288460 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288474 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288487 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288500 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288512 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288526 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.288539 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b"
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290170 4767 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290213 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290229 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290245 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290258 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290271 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290284 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290298 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290311 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290326 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290338 4767 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290354 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290366 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290378 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290391 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290402 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290414 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290428 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290444 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290458 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290481 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290496 4767 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290507 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290519 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290532 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290547 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290559 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290570 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290583 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290595 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290617 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290633 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290646 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290660 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290673 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290687 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290700 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290713 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290725 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290738 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290750 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290761 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290774 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290787 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290808 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290822 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290836 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290852 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290866 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290882 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290901 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290915 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290930 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290942 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290955 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290970 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.290984 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.291003 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.291018 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.291032 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.291046 4767 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.291058 4767 reconstruct.go:97] "Volume reconstruction finished" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.291066 4767 reconciler.go:26] "Reconciler: start to sync state" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.300863 4767 manager.go:324] Recovery completed Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.314910 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.319191 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.319255 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.319265 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.321743 4767 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.321757 4767 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.321782 4767 state_mem.go:36] "Initialized new in-memory state store" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.321776 4767 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.324134 4767 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.324213 4767 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.324262 4767 kubelet.go:2335] "Starting kubelet main sync loop" Jan 27 15:49:38 crc kubenswrapper[4767]: E0127 15:49:38.324323 4767 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.325296 4767 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Jan 27 15:49:38 crc kubenswrapper[4767]: E0127 15:49:38.325386 4767 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.337429 4767 policy_none.go:49] "None policy: Start" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.338911 4767 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.338988 4767 state_mem.go:35] "Initializing new in-memory state store" Jan 27 15:49:38 crc kubenswrapper[4767]: E0127 15:49:38.372733 4767 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.401543 4767 manager.go:334] "Starting Device Plugin manager" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.401835 4767 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.401855 4767 server.go:79] "Starting device plugin registration server" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.402193 4767 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.402227 4767 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.402322 4767 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.402407 4767 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.402415 4767 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 27 15:49:38 crc kubenswrapper[4767]: E0127 15:49:38.408407 4767 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.426730 4767 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.426851 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.429354 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.429417 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.429433 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.429570 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.429766 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.429887 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.430347 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.430377 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.430385 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.430478 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.430665 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.430725 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.430934 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.430960 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.430968 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.431091 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.431110 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.431119 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.431228 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.431295 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.431329 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.431462 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.431494 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.431503 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.431735 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.431754 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.431762 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.431840 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.431929 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.431951 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.431961 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.432187 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.432359 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.432378 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.432389 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.432533 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.432559 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.432932 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.432958 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.432968 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.433222 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.433245 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.433257 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:49:38 crc kubenswrapper[4767]: E0127 15:49:38.475323 4767 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="400ms"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.492795 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.492843 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.492879 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.492902 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.493060 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.493155 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.493214 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.493240 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.493261 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.493291 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.493320 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.493357 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.493376 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.493426 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.493481 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.502480 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.503847 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.503886 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.503896 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.503920 4767 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: E0127 15:49:38.504459 4767 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.595303 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.595410 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.595437 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.595479 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.595502 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.595568 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.595587 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.595641 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.595676 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.595723 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.595744 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.595805 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.595813 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.595803 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.595870 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.595894 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.595906 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.595913 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.595932 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.595964 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.595975 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.595999 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.596004 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.596018 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.596034 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.596025 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.596090 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.596069 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.596050 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.596219 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.705570 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.707218 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.707260 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.707271 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.707294 4767 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 27 15:49:38 crc kubenswrapper[4767]: E0127 15:49:38.707746 4767 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc"
dial tcp 38.102.83.132:6443: connect: connection refused" node="crc" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.754336 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.778677 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.789446 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.797952 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-e03feea9d08a4357592039d81ba4fa73b23aa3f30a18108138b1cf7679cb91e5 WatchSource:0}: Error finding container e03feea9d08a4357592039d81ba4fa73b23aa3f30a18108138b1cf7679cb91e5: Status 404 returned error can't find the container with id e03feea9d08a4357592039d81ba4fa73b23aa3f30a18108138b1cf7679cb91e5 Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.802880 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:49:38 crc kubenswrapper[4767]: I0127 15:49:38.808238 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.813522 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-46285fce5692fca4b5d2faf4afb8b5554266711f08bed0d12dd67d3d3b24aa3f WatchSource:0}: Error finding container 46285fce5692fca4b5d2faf4afb8b5554266711f08bed0d12dd67d3d3b24aa3f: Status 404 returned error can't find the container with id 46285fce5692fca4b5d2faf4afb8b5554266711f08bed0d12dd67d3d3b24aa3f Jan 27 15:49:38 crc kubenswrapper[4767]: W0127 15:49:38.826239 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-7c430fb2e43a15bf4875e686ca601747747e973988a7885fdf4267df7bfafaa7 WatchSource:0}: Error finding container 7c430fb2e43a15bf4875e686ca601747747e973988a7885fdf4267df7bfafaa7: Status 404 returned error can't find the container with id 7c430fb2e43a15bf4875e686ca601747747e973988a7885fdf4267df7bfafaa7 Jan 27 15:49:38 crc kubenswrapper[4767]: E0127 15:49:38.877238 4767 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="800ms" Jan 27 15:49:39 crc kubenswrapper[4767]: I0127 15:49:39.108552 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:39 crc kubenswrapper[4767]: I0127 15:49:39.109723 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:39 crc kubenswrapper[4767]: I0127 15:49:39.109756 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:39 crc kubenswrapper[4767]: I0127 15:49:39.109765 4767 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:39 crc kubenswrapper[4767]: I0127 15:49:39.109814 4767 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 15:49:39 crc kubenswrapper[4767]: E0127 15:49:39.110385 4767 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc" Jan 27 15:49:39 crc kubenswrapper[4767]: W0127 15:49:39.208760 4767 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Jan 27 15:49:39 crc kubenswrapper[4767]: E0127 15:49:39.208861 4767 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:49:39 crc kubenswrapper[4767]: I0127 15:49:39.264902 4767 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Jan 27 15:49:39 crc kubenswrapper[4767]: I0127 15:49:39.271122 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 20:06:32.556051911 +0000 UTC Jan 27 15:49:39 crc kubenswrapper[4767]: I0127 15:49:39.329391 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"33a64518d9051042c2a947a376de67e40398dcc38adf86c2651e20a1863f72f9"} Jan 27 15:49:39 crc kubenswrapper[4767]: I0127 15:49:39.330480 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"46285fce5692fca4b5d2faf4afb8b5554266711f08bed0d12dd67d3d3b24aa3f"} Jan 27 15:49:39 crc kubenswrapper[4767]: I0127 15:49:39.331870 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e03feea9d08a4357592039d81ba4fa73b23aa3f30a18108138b1cf7679cb91e5"} Jan 27 15:49:39 crc kubenswrapper[4767]: I0127 15:49:39.333934 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7c430fb2e43a15bf4875e686ca601747747e973988a7885fdf4267df7bfafaa7"} Jan 27 15:49:39 crc kubenswrapper[4767]: I0127 15:49:39.337331 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5788644c7a3ff8b72a1149a755ec9a9007ec7a9596bee8dd5b30794c47bbd2e6"} Jan 27 15:49:39 crc kubenswrapper[4767]: W0127 15:49:39.350627 4767 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Jan 27 15:49:39 crc kubenswrapper[4767]: E0127 15:49:39.350732 4767 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:49:39 crc kubenswrapper[4767]: W0127 15:49:39.538883 4767 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Jan 27 15:49:39 crc kubenswrapper[4767]: E0127 15:49:39.538967 4767 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:49:39 crc kubenswrapper[4767]: E0127 15:49:39.678699 4767 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="1.6s" Jan 27 15:49:39 crc kubenswrapper[4767]: W0127 15:49:39.744126 4767 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Jan 27 15:49:39 crc kubenswrapper[4767]: E0127 15:49:39.744310 4767 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:49:39 crc kubenswrapper[4767]: I0127 15:49:39.910976 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:39 crc kubenswrapper[4767]: I0127 15:49:39.912161 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:39 crc kubenswrapper[4767]: I0127 15:49:39.912213 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:39 crc kubenswrapper[4767]: I0127 15:49:39.912224 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:39 crc kubenswrapper[4767]: I0127 15:49:39.912247 4767 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 15:49:39 crc kubenswrapper[4767]: E0127 15:49:39.912684 4767 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc" Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.258541 4767 certificate_manager.go:356] 
kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 15:49:40 crc kubenswrapper[4767]: E0127 15:49:40.260290 4767 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.264291 4767 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.271405 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 05:32:19.202520929 +0000 UTC Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.342570 4767 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="4f768bb7e98e5892e239288a3478996dcbfdaa66c0009223ee65b2c689a49a88" exitCode=0 Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.342663 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.342751 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"4f768bb7e98e5892e239288a3478996dcbfdaa66c0009223ee65b2c689a49a88"} Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.343303 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.343329 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.343338 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.351473 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62"} Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.351546 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb"} Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.351561 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe"} Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.351572 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e"} Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.351500 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.352345 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.352371 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.352379 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.353028 4767 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220" exitCode=0 Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.353082 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220"} Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.353105 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.353744 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.353767 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.353777 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.354481 4767 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="32c5a93a9bc5e435a644aca26c468de6d30a428455aa8fc1c3f789916f7e1c1a" exitCode=0 Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.354553 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.354583 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"32c5a93a9bc5e435a644aca26c468de6d30a428455aa8fc1c3f789916f7e1c1a"} Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.355542 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.357038 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.357061 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.357076 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.357140 4767 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.357185 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.357215 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.359534 4767 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="3a76a7282d4a6d2928b7a20e383ca260fa23c152c91a9b0d065c3545d1703a8f" exitCode=0 Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.359575 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"3a76a7282d4a6d2928b7a20e383ca260fa23c152c91a9b0d065c3545d1703a8f"} Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.359661 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.360419 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.360440 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:40 crc kubenswrapper[4767]: I0127 15:49:40.360449 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:40 crc kubenswrapper[4767]: W0127 15:49:40.812677 4767 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Jan 27 15:49:40 crc kubenswrapper[4767]: E0127 15:49:40.813043 4767 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:49:41 crc kubenswrapper[4767]: W0127 15:49:41.045026 4767 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Jan 27 15:49:41 crc kubenswrapper[4767]: E0127 15:49:41.045122 4767 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.135034 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.264913 4767 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.272117 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 04:58:45.523589905 +0000 UTC Jan 27 15:49:41 crc kubenswrapper[4767]: E0127 15:49:41.280798 4767 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="3.2s" Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.363972 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4765fc3fd0fe4e4940f0e9b2421dbefe5545487182613514fd99b05a9b3cbb2d"} Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.364008 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.364019 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1a30ef0d655a13360eb3001feb2d6d2e511d3063e2903f2fcff4714af7799c38"} Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.364033 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"aa73f7eadac5c6ff80c55f80cd63c9a2aca033e9db04b351779738aeea07d638"} Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.365280 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.365322 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.365339 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:41 crc kubenswrapper[4767]: W0127 15:49:41.367360 4767 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Jan 27 15:49:41 crc kubenswrapper[4767]: E0127 15:49:41.367432 4767 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.368266 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467"} Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.368306 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9"} Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.368321 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243"} Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.368332 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd"} Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.370384 4767 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="915360c6a5e156d4d42f2798ded12a113619420fc200c81f3fa3cefab71a47df" exitCode=0 Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.370416 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"915360c6a5e156d4d42f2798ded12a113619420fc200c81f3fa3cefab71a47df"} Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.370523 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.371635 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.371714 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.371783 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.372549 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.372839 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.373034 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e4e5bddbfbc9603046959d0ee01d0f797d0098ce21700eec3931967e9f471084"} Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.373531 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.373563 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.373573 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.374251 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.374276 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.374289 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.513683 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.514871 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.514909 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.514925 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:41 crc kubenswrapper[4767]: I0127 15:49:41.514953 4767 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 15:49:41 crc kubenswrapper[4767]: E0127 15:49:41.515391 4767 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc" Jan 27 15:49:42 crc kubenswrapper[4767]: I0127 15:49:42.273386 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 23:12:15.037534353 +0000 UTC Jan 27 15:49:42 crc kubenswrapper[4767]: I0127 15:49:42.378938 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde"} Jan 27 15:49:42 crc kubenswrapper[4767]: I0127 15:49:42.379005 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:42 crc kubenswrapper[4767]: I0127 15:49:42.380622 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:42 crc kubenswrapper[4767]: I0127 15:49:42.380668 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:42 crc kubenswrapper[4767]: I0127 15:49:42.380686 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:42 crc kubenswrapper[4767]: I0127 15:49:42.383580 4767 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="fc158e087013235e67466bf746c8bea1ff5674609a9b16b01a90a2a5a39ed334" exitCode=0 Jan 27 15:49:42 crc kubenswrapper[4767]: I0127 15:49:42.383763 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:42 crc kubenswrapper[4767]: I0127 15:49:42.383920 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:42 crc kubenswrapper[4767]: I0127 15:49:42.383663 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"fc158e087013235e67466bf746c8bea1ff5674609a9b16b01a90a2a5a39ed334"} Jan 27 15:49:42 crc kubenswrapper[4767]: I0127 15:49:42.383849 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Jan 27 15:49:42 crc kubenswrapper[4767]: I0127 15:49:42.384304 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 15:49:42 crc kubenswrapper[4767]: I0127 15:49:42.384455 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:42 crc kubenswrapper[4767]: I0127 15:49:42.386015 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:42 crc kubenswrapper[4767]: I0127 15:49:42.386059 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:42 crc kubenswrapper[4767]: I0127 15:49:42.386071 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:42 crc kubenswrapper[4767]: I0127 15:49:42.386830 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:42 crc kubenswrapper[4767]: I0127 15:49:42.386860 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:42 crc kubenswrapper[4767]: I0127 15:49:42.386873 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:42 crc kubenswrapper[4767]: I0127 15:49:42.387530 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:42 crc kubenswrapper[4767]: I0127 15:49:42.387553 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:42 crc kubenswrapper[4767]: I0127 15:49:42.387562 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:42 crc kubenswrapper[4767]: I0127 15:49:42.387979 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:42 crc kubenswrapper[4767]: I0127 15:49:42.388001 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:42 crc kubenswrapper[4767]: I0127 15:49:42.388008 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:43 crc kubenswrapper[4767]: I0127 15:49:43.274251 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 13:17:08.723728155 +0000 UTC Jan 27 15:49:43 crc kubenswrapper[4767]: I0127 15:49:43.392346 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7b50cd3e07e1be2c4acfe5f7f9b2d7c2081cd707ac79700e58a4a69365be9061"} Jan 27 15:49:43 crc kubenswrapper[4767]: I0127 15:49:43.392398 4767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 15:49:43 crc kubenswrapper[4767]: I0127 15:49:43.392408 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"998b9f77d56b09a7f43564fb2cfd1a2f0c7667ead472a734e1619bb36d063e0a"} Jan 27 15:49:43 crc kubenswrapper[4767]: I0127 15:49:43.392424 4767 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Jan 27 15:49:43 crc kubenswrapper[4767]: I0127 15:49:43.392431 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"48bed3f848319c4c0a83edb33a6e88a70259e1abfcd75f44bb4cd5cf84166355"} Jan 27 15:49:43 crc kubenswrapper[4767]: I0127 15:49:43.392448 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7d814e1bbd3556790ea49fa61224968631434d61369aa14a3cbc4f54161ccf4b"} Jan 27 15:49:43 crc kubenswrapper[4767]: I0127 15:49:43.392465 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"765188294d75bfe9dcdf6ee636af3821fc26b00005e03e3d4330b9e097824a02"} Jan 27 15:49:43 crc kubenswrapper[4767]: I0127 15:49:43.392468 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:43 crc kubenswrapper[4767]: I0127 15:49:43.392431 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:43 crc kubenswrapper[4767]: I0127 15:49:43.393719 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:43 crc kubenswrapper[4767]: I0127 15:49:43.393740 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:43 crc kubenswrapper[4767]: I0127 15:49:43.393746 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:43 crc kubenswrapper[4767]: I0127 15:49:43.393754 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:43 crc kubenswrapper[4767]: I0127 15:49:43.393766 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:43 crc kubenswrapper[4767]: I0127 15:49:43.393741 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:43 crc kubenswrapper[4767]: I0127 15:49:43.393847 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:43 crc kubenswrapper[4767]: I0127 15:49:43.393875 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:43 crc kubenswrapper[4767]: I0127 15:49:43.393757 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:43 crc kubenswrapper[4767]: I0127 15:49:43.408695 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:49:44 crc kubenswrapper[4767]: I0127 15:49:44.275175 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 08:11:05.561584495 +0000 UTC Jan 27 15:49:44 crc kubenswrapper[4767]: I0127 15:49:44.394683 4767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 15:49:44 crc kubenswrapper[4767]: I0127 15:49:44.394742 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 
15:49:44 crc kubenswrapper[4767]: I0127 15:49:44.394810 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:44 crc kubenswrapper[4767]: I0127 15:49:44.395903 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:44 crc kubenswrapper[4767]: I0127 15:49:44.396011 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:44 crc kubenswrapper[4767]: I0127 15:49:44.395935 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:44 crc kubenswrapper[4767]: I0127 15:49:44.396138 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:44 crc kubenswrapper[4767]: I0127 15:49:44.396155 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:44 crc kubenswrapper[4767]: I0127 15:49:44.396084 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:44 crc kubenswrapper[4767]: I0127 15:49:44.514693 4767 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 15:49:44 crc kubenswrapper[4767]: I0127 15:49:44.716082 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:44 crc kubenswrapper[4767]: I0127 15:49:44.717484 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:44 crc kubenswrapper[4767]: I0127 15:49:44.717528 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:44 crc kubenswrapper[4767]: I0127 15:49:44.717544 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:44 crc kubenswrapper[4767]: I0127 15:49:44.717574 4767 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 15:49:44 crc kubenswrapper[4767]: I0127 15:49:44.791992 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 27 15:49:45 crc kubenswrapper[4767]: I0127 15:49:45.275534 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 20:36:51.375646541 +0000 UTC Jan 27 15:49:45 crc kubenswrapper[4767]: I0127 15:49:45.317832 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:49:45 crc kubenswrapper[4767]: I0127 15:49:45.397235 4767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 15:49:45 crc kubenswrapper[4767]: I0127 15:49:45.397284 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:45 crc kubenswrapper[4767]: I0127 15:49:45.397284 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:45 crc kubenswrapper[4767]: I0127 15:49:45.398408 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:45 crc kubenswrapper[4767]: I0127 15:49:45.398451 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:49:45 crc kubenswrapper[4767]: I0127 15:49:45.398464 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:45 crc kubenswrapper[4767]: I0127 15:49:45.398488 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:45 crc kubenswrapper[4767]: I0127 15:49:45.398515 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:45 crc kubenswrapper[4767]: I0127 15:49:45.398530 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:45 crc kubenswrapper[4767]: I0127 15:49:45.461521 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 27 15:49:46 crc kubenswrapper[4767]: I0127 15:49:46.276337 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 06:19:22.330996005 +0000 UTC Jan 27 15:49:46 crc kubenswrapper[4767]: I0127 15:49:46.399579 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:46 crc kubenswrapper[4767]: I0127 15:49:46.400566 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:46 crc kubenswrapper[4767]: I0127 15:49:46.400637 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:46 crc kubenswrapper[4767]: I0127 15:49:46.400652 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:46 crc kubenswrapper[4767]: I0127 15:49:46.474711 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:49:46 crc kubenswrapper[4767]: I0127 15:49:46.474951 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:46 crc kubenswrapper[4767]: I0127 15:49:46.476189 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:46 crc kubenswrapper[4767]: I0127 15:49:46.476257 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:46 crc kubenswrapper[4767]: I0127 15:49:46.476270 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:47 crc kubenswrapper[4767]: I0127 15:49:47.277487 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 14:00:20.693686821 +0000 UTC Jan 27 15:49:48 crc kubenswrapper[4767]: I0127 15:49:48.388427 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 17:01:59.590872863 +0000 UTC Jan 27 15:49:48 crc kubenswrapper[4767]: E0127 15:49:48.408926 4767 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 15:49:48 crc kubenswrapper[4767]: I0127 15:49:48.426349 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
Jan 27 15:49:48 crc kubenswrapper[4767]: I0127 15:49:48.426542 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:48 crc kubenswrapper[4767]: I0127 15:49:48.427748 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:48 crc kubenswrapper[4767]: I0127 15:49:48.427794 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:48 crc kubenswrapper[4767]: I0127 15:49:48.427810 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:49 crc kubenswrapper[4767]: I0127 15:49:49.094922 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:49:49 crc kubenswrapper[4767]: I0127 15:49:49.244934 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:49:49 crc kubenswrapper[4767]: I0127 15:49:49.252975 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:49:49 crc kubenswrapper[4767]: I0127 15:49:49.389193 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 16:13:37.905355847 +0000 UTC Jan 27 15:49:49 crc kubenswrapper[4767]: I0127 15:49:49.408921 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:49 crc kubenswrapper[4767]: I0127 15:49:49.409997 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:49 crc kubenswrapper[4767]: I0127 15:49:49.410044 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:49 crc kubenswrapper[4767]: I0127 15:49:49.410059 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:49 crc kubenswrapper[4767]: I0127 15:49:49.414053 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:49:50 crc kubenswrapper[4767]: I0127 15:49:50.390156 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 09:02:16.980348756 +0000 UTC Jan 27 15:49:50 crc kubenswrapper[4767]: I0127 15:49:50.411555 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:50 crc kubenswrapper[4767]: I0127 15:49:50.412586 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:50 crc kubenswrapper[4767]: I0127 15:49:50.412617 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:50 crc kubenswrapper[4767]: I0127 15:49:50.412627 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:51 crc kubenswrapper[4767]: I0127 15:49:51.390654 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2025-11-11 21:25:47.26563156 +0000 UTC Jan 27 15:49:51 crc kubenswrapper[4767]: I0127 15:49:51.413798 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:49:51 crc kubenswrapper[4767]: I0127 15:49:51.414534 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:49:51 crc kubenswrapper[4767]: I0127 15:49:51.414558 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:49:51 crc kubenswrapper[4767]: I0127 15:49:51.414567 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:49:52 crc kubenswrapper[4767]: I0127 15:49:52.095538 4767 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 15:49:52 crc kubenswrapper[4767]: I0127 15:49:52.095608 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 15:49:52 crc kubenswrapper[4767]: W0127 15:49:52.251911 4767 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 27 15:49:52 crc kubenswrapper[4767]: I0127 15:49:52.252057 4767 trace.go:236] Trace[160895698]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 15:49:42.250) (total time: 10001ms): Jan 27 15:49:52 crc kubenswrapper[4767]: Trace[160895698]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (15:49:52.251) Jan 27 15:49:52 crc kubenswrapper[4767]: Trace[160895698]: [10.001352845s] [10.001352845s] END Jan 27 15:49:52 crc kubenswrapper[4767]: E0127 15:49:52.252093 4767 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 27 15:49:52 crc kubenswrapper[4767]: I0127 15:49:52.265840 4767 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 27 15:49:52 crc kubenswrapper[4767]: I0127 15:49:52.391329 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 05:24:39.791109306 +0000 UTC Jan 27 15:49:53 crc kubenswrapper[4767]: I0127 15:49:53.292940 4767 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver 
namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 27 15:49:53 crc kubenswrapper[4767]: I0127 15:49:53.293036 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 27 15:49:53 crc kubenswrapper[4767]: I0127 15:49:53.297854 4767 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 27 15:49:53 crc kubenswrapper[4767]: I0127 15:49:53.297920 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 27 15:49:53 crc kubenswrapper[4767]: I0127 15:49:53.391604 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 02:27:11.732519858 +0000 UTC
Jan 27 15:49:53 crc kubenswrapper[4767]: I0127 15:49:53.418497 4767 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]log ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]etcd ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]poststarthook/openshift.io-api-request-count-filter ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]poststarthook/openshift.io-startkubeinformers ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]poststarthook/priority-and-fairness-config-consumer ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]poststarthook/priority-and-fairness-filter ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]poststarthook/start-apiextensions-informers ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld
Jan 27 15:49:53 crc kubenswrapper[4767]: [-]poststarthook/crd-informer-synced failed: reason withheld
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]poststarthook/start-system-namespaces-controller ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]poststarthook/start-cluster-authentication-info-controller ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]poststarthook/start-legacy-token-tracking-controller ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]poststarthook/start-service-ip-repair-controllers ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Jan 27 15:49:53 crc kubenswrapper[4767]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]poststarthook/priority-and-fairness-config-producer ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]poststarthook/bootstrap-controller ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]poststarthook/start-kube-aggregator-informers ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]poststarthook/apiservice-status-local-available-controller ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]poststarthook/apiservice-status-remote-available-controller ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [-]poststarthook/apiservice-registration-controller failed: reason withheld
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]poststarthook/apiservice-wait-for-first-sync ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]poststarthook/kube-apiserver-autoregistration ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]autoregister-completion ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]poststarthook/apiservice-openapi-controller ok
Jan 27 15:49:53 crc kubenswrapper[4767]: [+]poststarthook/apiservice-openapiv3-controller ok
Jan 27 15:49:53 crc kubenswrapper[4767]: livez check failed
Jan 27 15:49:53 crc kubenswrapper[4767]: I0127 15:49:53.418571 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 15:49:53 crc kubenswrapper[4767]: I0127 15:49:53.420005 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 27 15:49:53 crc kubenswrapper[4767]: I0127 15:49:53.421762 4767 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde" exitCode=255
Jan 27 15:49:53 crc kubenswrapper[4767]: I0127 15:49:53.421821 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde"}
Jan 27 15:49:53 crc kubenswrapper[4767]: I0127 15:49:53.421987 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 15:49:53 crc kubenswrapper[4767]: I0127 15:49:53.422822 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:49:53 crc kubenswrapper[4767]: I0127 15:49:53.422868 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:49:53 crc kubenswrapper[4767]: I0127 15:49:53.422882 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:49:53 crc kubenswrapper[4767]: I0127 15:49:53.423575 4767 scope.go:117] "RemoveContainer" containerID="a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde"
Jan 27 15:49:54 crc kubenswrapper[4767]: I0127 15:49:54.392765 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 01:54:24.491932322 +0000 UTC
Jan 27 15:49:54 crc kubenswrapper[4767]: I0127 15:49:54.426690 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 27 15:49:54 crc kubenswrapper[4767]: I0127 15:49:54.428570 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f"}
Jan 27 15:49:54 crc kubenswrapper[4767]: I0127 15:49:54.428765 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 15:49:54 crc kubenswrapper[4767]: I0127 15:49:54.429573 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:49:54 crc kubenswrapper[4767]: I0127 15:49:54.429610 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:49:54 crc kubenswrapper[4767]: I0127 15:49:54.429622 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:49:54 crc kubenswrapper[4767]: I0127 15:49:54.814721 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 27 15:49:54 crc kubenswrapper[4767]: I0127 15:49:54.814918 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 15:49:54 crc kubenswrapper[4767]: I0127 15:49:54.816144 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:49:54 crc kubenswrapper[4767]: I0127 15:49:54.816188 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:49:54 crc kubenswrapper[4767]: I0127 15:49:54.816216 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:49:54 crc kubenswrapper[4767]: I0127 15:49:54.827787 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 27 15:49:55 crc kubenswrapper[4767]: I0127 15:49:55.393552 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 22:09:06.454742399 +0000 UTC
Jan 27 15:49:55 crc kubenswrapper[4767]: I0127 15:49:55.431062 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 15:49:55 crc kubenswrapper[4767]: I0127 15:49:55.432427 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:49:55 crc kubenswrapper[4767]: I0127 15:49:55.432479 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:49:55 crc kubenswrapper[4767]: I0127 15:49:55.432493 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:49:56 crc kubenswrapper[4767]: I0127 15:49:56.393750 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 00:09:02.618684882 +0000 UTC
Jan 27 15:49:56 crc kubenswrapper[4767]: I0127 15:49:56.475333 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 15:49:56 crc kubenswrapper[4767]: I0127 15:49:56.475539 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 15:49:56 crc kubenswrapper[4767]: I0127 15:49:56.476748 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:49:56 crc kubenswrapper[4767]: I0127 15:49:56.476799 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:49:56 crc kubenswrapper[4767]: I0127 15:49:56.476811 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.048893 4767 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.258080 4767 apiserver.go:52] "Watching apiserver"
Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.264729 4767 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.264966 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.265593 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.265621 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.265670 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 15:49:57 crc kubenswrapper[4767]: E0127 15:49:57.265674 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.265593 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 15:49:57 crc kubenswrapper[4767]: E0127 15:49:57.265722 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.265631 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 15:49:57 crc kubenswrapper[4767]: E0127 15:49:57.265907 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.266155 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.267568 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.268103 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.268299 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.268493 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.268765 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.268931 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.269415 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.269456 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.270022 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.273085 4767 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.299864 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.312999 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.322461 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.331455 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.343908 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.354297 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.363865 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.374283 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.383921 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.394490 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 02:39:03.747654781 +0000 UTC Jan 27 15:49:57 crc kubenswrapper[4767]: I0127 15:49:57.398707 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 15:49:58 crc kubenswrapper[4767]: E0127 15:49:58.281424 4767 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.282776 4767 trace.go:236] Trace[1678005254]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 15:49:46.323) (total time: 11958ms):
Jan 27 15:49:58 crc kubenswrapper[4767]: Trace[1678005254]: ---"Objects listed" error: 11958ms (15:49:58.282)
Jan 27 15:49:58 crc kubenswrapper[4767]: Trace[1678005254]: [11.958955113s] [11.958955113s] END
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.282801 4767 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.283833 4767 trace.go:236] Trace[1311652545]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 15:49:46.520) (total time: 11763ms):
Jan 27 15:49:58 crc kubenswrapper[4767]: Trace[1311652545]: ---"Objects listed" error: 11763ms (15:49:58.283)
Jan 27 15:49:58 crc kubenswrapper[4767]: Trace[1311652545]: [11.763685752s] [11.763685752s] END
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.283865 4767 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.284373 4767 trace.go:236] Trace[524241017]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 15:49:46.360) (total time: 11924ms):
Jan 27 15:49:58 crc kubenswrapper[4767]: Trace[524241017]: ---"Objects listed" error: 11924ms (15:49:58.284)
Jan 27 15:49:58 crc kubenswrapper[4767]: Trace[524241017]: [11.924265304s] [11.924265304s] END
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.284393 4767 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.284642 4767 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 27 15:49:58 crc kubenswrapper[4767]: E0127 15:49:58.287475 4767 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127
15:49:58.295967 4767 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.320440 4767 csr.go:261] certificate signing request csr-qx76x is approved, waiting to be issued Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.337469 4767 csr.go:257] certificate signing request csr-qx76x is issued Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.341214 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.352843 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.365191 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.385509 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.385561 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.385590 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.385615 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.385649 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.385672 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.385695 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.385719 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.385745 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.385769 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.385792 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.385818 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.385849 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.385871 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.385891 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.385912 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.385930 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.385946 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.385962 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.385977 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.385994 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.385979 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386025 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386010 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386095 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386116 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386133 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386152 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386169 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386185 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386229 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386244 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386263 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386279 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386296 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386313 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386328 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386346 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386351 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386363 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386381 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386400 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386417 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386434 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386452 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386503 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386544 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386573 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386592 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386612 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386631 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386635 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386651 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386695 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386723 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386748 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386776 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386788 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386801 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386855 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386893 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386929 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386947 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386972 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386963 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.387055 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.387161 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.387217 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.387269 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.387311 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.387432 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.387440 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.387452 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.387451 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.387550 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.387565 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.387652 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.387650 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.387681 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: E0127 15:49:58.387761 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:49:58.887735774 +0000 UTC m=+21.276753377 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.387778 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.387831 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.387849 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.387877 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.387917 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.387983 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.386965 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388037 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388049 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388073 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388074 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388091 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388108 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388127 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388144 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388161 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388178 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388221 4767 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388242 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388243 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388258 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388278 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388293 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388304 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388311 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388329 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388345 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388363 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388378 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388395 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388409 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388424 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388441 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388459 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388478 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388495 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388510 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388527 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388546 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388564 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388579 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388594 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388609 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388625 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod 
\"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388645 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388660 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388681 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388704 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388728 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388758 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388778 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388803 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388821 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388843 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod 
\"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388865 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388893 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388913 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388992 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.389021 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.389042 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.389061 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.389119 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.389139 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.389158 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.389180 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.390053 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.390422 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.390445 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.390462 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.390483 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.390499 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.390516 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.390535 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.390553 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.390571 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.390588 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.390604 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.390625 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.390641 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.390656 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.390673 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.390690 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.390708 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 15:49:58 crc 
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.390725 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.390976 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391049 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391075 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391135 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391159 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391244 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391266 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391299 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391425 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391447 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391467 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391488 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391512 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391537 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391560 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391582 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391605 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391625 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391644 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391709 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391731 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391755 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391776 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391796 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391816 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391837 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391862 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391880 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391900 4767 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391917 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391935 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391953 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391972 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391990 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.392007 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.392026 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.392042 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.392060 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.392077 4767 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.398688 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.399874 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400372 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400413 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400433 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400453 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400476 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400495 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400521 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400538 4767 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400559 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400582 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400601 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400620 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400668 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400688 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400716 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400739 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400758 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400778 4767 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400797 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400816 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400833 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400850 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400869 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400891 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388376 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388488 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400973 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388605 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388601 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388620 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388656 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388676 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388794 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.388956 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.389140 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.389159 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.389162 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.389487 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.389569 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.389615 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.389704 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.389721 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.389967 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.390087 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.390131 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.390312 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.390331 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.390362 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.390953 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.390987 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391106 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391214 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.391570 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.392381 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.392563 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.392633 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.392723 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.392737 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.392799 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.392998 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.393419 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.399263 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.399344 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 14:35:23.909450112 +0000 UTC Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.399356 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.399704 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.399792 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.399777 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.399853 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.399844 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.399917 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.399977 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400064 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400166 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). 
InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400378 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400614 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400930 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400937 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.401638 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400886 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.400943 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402070 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402119 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: 
\"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402156 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402187 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402239 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402269 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402304 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402333 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402365 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402398 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402427 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402457 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402483 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402515 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402594 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402612 4767 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402631 4767 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402645 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402658 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402672 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402687 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402701 4767 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402715 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402727 4767 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402741 4767 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402758 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402774 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402787 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402800 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402814 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402827 4767 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402839 4767 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402851 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402864 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" 
(UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402880 4767 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402896 4767 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.401676 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402909 4767 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.401754 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402924 4767 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402939 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402952 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402966 4767 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402979 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402994 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.403007 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.403020 4767 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.403033 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.403047 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.403063 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.403076 4767 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.403090 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.403102 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.403114 4767 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.403129 4767 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.403143 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.401954 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402025 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402063 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402256 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402333 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402607 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.402607 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: E0127 15:49:58.402985 4767 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.403295 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: E0127 15:49:58.403330 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:49:58.903305895 +0000 UTC m=+21.292323418 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.403382 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.403693 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.403715 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.404510 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.404522 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.404805 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.405246 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.405633 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.405760 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.405776 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.405997 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.406072 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.406118 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.406621 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.406848 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.407181 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.407231 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.407534 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.407576 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.408537 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.406230 4767 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.409581 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.409661 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.410849 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.411450 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.411538 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.411672 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.411783 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.412059 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.412076 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: E0127 15:49:58.403145 4767 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:49:58 crc kubenswrapper[4767]: E0127 15:49:58.412220 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:49:58.912174932 +0000 UTC m=+21.301192635 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.412237 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.412242 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.412333 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.412506 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.412548 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.412992 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.413004 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.413317 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.413410 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.413439 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.413640 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.413657 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.413832 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.413948 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.414101 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.414271 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.414306 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.414480 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.414556 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.414981 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.437379 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.439047 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.454632 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.455189 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.455712 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.456236 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.457383 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.457605 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.457715 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.457969 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: E0127 15:49:58.458302 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:49:58 crc kubenswrapper[4767]: E0127 15:49:58.458341 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:49:58 crc kubenswrapper[4767]: E0127 15:49:58.458354 4767 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:49:58 crc kubenswrapper[4767]: E0127 15:49:58.458406 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 15:49:58.95838855 +0000 UTC m=+21.347406063 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.458512 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.458921 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.460620 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.461415 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.465243 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.466598 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.403126 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.467381 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.467592 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.467749 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.467780 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.468527 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.468704 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.468857 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.469328 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.469499 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: E0127 15:49:58.469694 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:49:58 crc kubenswrapper[4767]: E0127 15:49:58.469715 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:49:58 crc kubenswrapper[4767]: E0127 15:49:58.469728 4767 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:49:58 crc kubenswrapper[4767]: E0127 15:49:58.469772 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 15:49:58.969756809 +0000 UTC m=+21.358774332 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.470004 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.470243 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.470313 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.470393 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.474093 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.474118 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.481463 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.488654 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.489402 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.490340 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.492920 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.493244 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.493279 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.494050 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.495343 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.497464 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.497492 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.498250 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.498252 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.498353 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.498246 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.499003 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.501625 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.502117 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504314 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504349 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504442 4767 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504454 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504465 4767 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504473 4767 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504481 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504491 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504499 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504508 4767 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504518 4767 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504533 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504548 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504559 4767 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504572 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504582 4767 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504592 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504603 4767 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504613 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504623 4767 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504633 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504643 4767 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504652 4767 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504662 4767 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504672 4767 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" 
(UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504681 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504692 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504702 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504714 4767 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504725 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504737 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504747 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504758 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504769 4767 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504781 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504792 4767 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504801 4767 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504809 4767 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504818 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504827 4767 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504837 4767 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504852 4767 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504869 4767 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504882 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504915 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504925 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504935 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504943 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504952 4767 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504960 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504969 4767 reconciler_common.go:293] "Volume detached for 
volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504977 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504986 4767 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.504996 4767 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505004 4767 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505013 4767 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505022 4767 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505030 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505043 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505051 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505060 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505069 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505077 4767 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc 
kubenswrapper[4767]: I0127 15:49:58.505085 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505093 4767 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505101 4767 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505110 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505118 4767 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505126 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505134 4767 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505142 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505150 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505158 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505166 4767 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505174 4767 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505182 4767 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" 
DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505190 4767 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505212 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505221 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505229 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505237 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505244 4767 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505254 4767 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505262 4767 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505271 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505279 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505288 4767 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505287 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505296 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505339 4767 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505349 4767 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505358 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505369 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505378 4767 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505387 4767 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505382 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505395 4767 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505424 4767 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505437 4767 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505446 4767 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505454 4767 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505462 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505449 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505470 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505478 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505486 4767 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505495 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505503 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505512 4767 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505520 4767 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505528 4767 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505539 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505547 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505554 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505562 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505570 4767 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505577 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505585 4767 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505594 4767 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505602 4767 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505610 4767 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505617 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: 
\"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505625 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505633 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505641 4767 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505649 4767 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505657 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505664 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505672 4767 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505681 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505688 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505696 4767 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505704 4767 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505712 4767 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505720 4767 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" 
DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505727 4767 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505735 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505743 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505751 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505759 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505766 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505774 4767 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505783 4767 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505790 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505799 4767 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505806 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505814 4767 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.505822 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc 
kubenswrapper[4767]: I0127 15:49:58.505831 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.510684 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.511533 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.511701 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.512120 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.512171 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.525039 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.530066 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.541045 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.548514 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.555877 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.561145 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.576470 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.591651 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.606047 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.606331 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.606361 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.606376 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.606389 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.606404 4767 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.606417 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.606432 4767 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.606445 4767 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.606459 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.606472 4767 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.618349 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.632755 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.784492 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.793988 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.799502 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 15:49:58 crc kubenswrapper[4767]: W0127 15:49:58.802705 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-0dfbdbce0d219de2d830f3e30089d0c85c8289087277b8b196fbd08061aeb42c WatchSource:0}: Error finding container 0dfbdbce0d219de2d830f3e30089d0c85c8289087277b8b196fbd08061aeb42c: Status 404 returned error can't find the container with id 0dfbdbce0d219de2d830f3e30089d0c85c8289087277b8b196fbd08061aeb42c Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.908175 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:49:58 crc kubenswrapper[4767]: I0127 15:49:58.908286 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:49:58 crc kubenswrapper[4767]: E0127 15:49:58.908439 4767 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:49:58 crc kubenswrapper[4767]: E0127 15:49:58.908555 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:49:59.908508815 +0000 UTC m=+22.297526338 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:49:58 crc kubenswrapper[4767]: E0127 15:49:58.908627 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:49:59.908614678 +0000 UTC m=+22.297632411 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.009179 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.009240 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.009268 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:49:59 crc kubenswrapper[4767]: E0127 15:49:59.009366 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:49:59 crc kubenswrapper[4767]: E0127 15:49:59.009380 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:49:59 crc kubenswrapper[4767]: E0127 15:49:59.009389 4767 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:49:59 crc kubenswrapper[4767]: E0127 15:49:59.009424 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 15:50:00.009412087 +0000 UTC m=+22.398429610 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:49:59 crc kubenswrapper[4767]: E0127 15:49:59.009672 4767 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:49:59 crc kubenswrapper[4767]: E0127 15:49:59.009710 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:50:00.009701506 +0000 UTC m=+22.398719029 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:49:59 crc kubenswrapper[4767]: E0127 15:49:59.009826 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:49:59 crc kubenswrapper[4767]: E0127 15:49:59.009856 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:49:59 crc kubenswrapper[4767]: E0127 15:49:59.009869 4767 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:49:59 crc kubenswrapper[4767]: E0127 15:49:59.009936 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 15:50:00.009917492 +0000 UTC m=+22.398935085 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.107179 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.115030 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.129516 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.144487 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.157667 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.175319 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 
15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.196509 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.207091 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.207775 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.236099 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.273297 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.310393 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.325020 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:49:59 crc kubenswrapper[4767]: E0127 15:49:59.325143 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.325570 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:49:59 crc kubenswrapper[4767]: E0127 15:49:59.325631 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.325671 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:49:59 crc kubenswrapper[4767]: E0127 15:49:59.325710 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.339077 4767 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-27 15:44:58 +0000 UTC, rotation deadline is 2026-10-25 07:34:29.211939935 +0000 UTC Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.339163 4767 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6495h44m29.872780618s for next certificate rotation Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.340318 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.351366 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.364956 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 
15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.376655 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.384876 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.396004 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.405281 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 12:42:32.788413532 +0000 UTC Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.477445 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"cee6baa4dba683b6e0c0a6d61b731abae4bb773bc077e2468e12ea803ccbf77c"} Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.479546 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15"} Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.479584 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f"} Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.479597 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0dfbdbce0d219de2d830f3e30089d0c85c8289087277b8b196fbd08061aeb42c"} Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.481289 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346"} Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.481349 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ce2651e9029e7c91db8a16f1729f292cf1d938ad669ab7c2b8306286184ff468"} Jan 27 15:49:59 crc kubenswrapper[4767]: E0127 15:49:59.490656 4767 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:49:59 crc 
kubenswrapper[4767]: I0127 15:49:59.494862 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-cksm8"] Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.495100 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.495298 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-cksm8" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.497394 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.497460 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.497708 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.517231 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.529279 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.545034 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.557241 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.568638 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 
15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.578010 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.589612 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.607109 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 
15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.615614 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3b53edc9-0d4a-4d33-ba63-43a9dc551cef-hosts-file\") pod \"node-resolver-cksm8\" (UID: \"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\") " pod="openshift-dns/node-resolver-cksm8" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.615709 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx27l\" (UniqueName: \"kubernetes.io/projected/3b53edc9-0d4a-4d33-ba63-43a9dc551cef-kube-api-access-lx27l\") pod \"node-resolver-cksm8\" (UID: \"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\") " pod="openshift-dns/node-resolver-cksm8" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.619073 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.629517 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.642423 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.650947 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.661462 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.673444 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.684929 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\"
:\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.696636 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.716949 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx27l\" (UniqueName: \"kubernetes.io/projected/3b53edc9-0d4a-4d33-ba63-43a9dc551cef-kube-api-access-lx27l\") pod \"node-resolver-cksm8\" (UID: \"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\") " pod="openshift-dns/node-resolver-cksm8" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.717003 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3b53edc9-0d4a-4d33-ba63-43a9dc551cef-hosts-file\") pod \"node-resolver-cksm8\" (UID: \"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\") " pod="openshift-dns/node-resolver-cksm8" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.718374 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3b53edc9-0d4a-4d33-ba63-43a9dc551cef-hosts-file\") pod \"node-resolver-cksm8\" (UID: \"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\") " pod="openshift-dns/node-resolver-cksm8" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.737019 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx27l\" (UniqueName: \"kubernetes.io/projected/3b53edc9-0d4a-4d33-ba63-43a9dc551cef-kube-api-access-lx27l\") pod \"node-resolver-cksm8\" (UID: \"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\") " pod="openshift-dns/node-resolver-cksm8" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.808800 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-cksm8" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.860605 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-mrkmx"] Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.861031 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.862983 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 15:49:59 crc kubenswrapper[4767]: W0127 15:49:59.862980 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b53edc9_0d4a_4d33_ba63_43a9dc551cef.slice/crio-00028bdd2b33b4946fa9c6a149f95e683f03d8747fa0abdbe4f7451984be6ff0 WatchSource:0}: Error finding container 00028bdd2b33b4946fa9c6a149f95e683f03d8747fa0abdbe4f7451984be6ff0: Status 404 returned error can't find the container with id 00028bdd2b33b4946fa9c6a149f95e683f03d8747fa0abdbe4f7451984be6ff0 Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.863781 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.863910 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.864517 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-xgf2q"] Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.865321 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.868495 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.868508 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.868732 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.869302 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.871484 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.872102 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.872513 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-zfxc7"] Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.872887 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-zfxc7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.874753 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.875647 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.875878 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.877552 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x97k7"] Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.878318 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.886645 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.886673 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.886875 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.886994 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.887015 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.887087 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.887185 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.890843 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 
15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.918742 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.918843 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.918955 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f02fb217-0bb2-4720-b223-3e3dcf0cff3f-system-cni-dir\") pod \"multus-additional-cni-plugins-xgf2q\" (UID: \"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf2q" Jan 27 15:49:59 crc kubenswrapper[4767]: E0127 15:49:59.918977 4767 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:50:01.918954358 +0000 UTC m=+24.307971881 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919019 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919051 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-multus-cni-dir\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919073 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-multus-socket-dir-parent\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919094 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-host-run-netns\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:49:59 crc kubenswrapper[4767]: E0127 15:49:59.919108 4767 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919115 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-multus-daemon-config\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919134 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6f3fb7f5-2925-4714-9e7b-44749885b298-mcd-auth-proxy-config\") pod \"machine-config-daemon-mrkmx\" (UID: \"6f3fb7f5-2925-4714-9e7b-44749885b298\") " pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" Jan 27 15:49:59 crc kubenswrapper[4767]: E0127 15:49:59.919153 4767 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:50:01.919142654 +0000 UTC m=+24.308160177 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919177 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-var-lib-openvswitch\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919220 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/96ceb606-f7e2-4d60-a632-a9443e01b99a-ovnkube-config\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919245 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-run-netns\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919264 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-host-run-multus-certs\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919285 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6f3fb7f5-2925-4714-9e7b-44749885b298-proxy-tls\") pod \"machine-config-daemon-mrkmx\" (UID: \"6f3fb7f5-2925-4714-9e7b-44749885b298\") " pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919308 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-system-cni-dir\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919362 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdgcn\" (UniqueName: \"kubernetes.io/projected/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-kube-api-access-wdgcn\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919395 4767 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/96ceb606-f7e2-4d60-a632-a9443e01b99a-ovn-node-metrics-cert\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919418 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-cnibin\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919440 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-log-socket\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919463 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-run-ovn-kubernetes\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919485 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6f3fb7f5-2925-4714-9e7b-44749885b298-rootfs\") pod \"machine-config-daemon-mrkmx\" (UID: \"6f3fb7f5-2925-4714-9e7b-44749885b298\") " pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919507 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-host-run-k8s-cni-cncf-io\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919526 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-node-log\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919544 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/96ceb606-f7e2-4d60-a632-a9443e01b99a-ovnkube-script-lib\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919564 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f02fb217-0bb2-4720-b223-3e3dcf0cff3f-cnibin\") pod \"multus-additional-cni-plugins-xgf2q\" (UID: \"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf2q" Jan 27 15:49:59 crc 
kubenswrapper[4767]: I0127 15:49:59.919587 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-run-systemd\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919654 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f02fb217-0bb2-4720-b223-3e3dcf0cff3f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xgf2q\" (UID: \"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf2q" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919744 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-kubelet\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919775 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-multus-conf-dir\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919795 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f02fb217-0bb2-4720-b223-3e3dcf0cff3f-os-release\") pod \"multus-additional-cni-plugins-xgf2q\" (UID: \"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf2q" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919818 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f02fb217-0bb2-4720-b223-3e3dcf0cff3f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xgf2q\" (UID: \"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf2q" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919842 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qczrn\" (UniqueName: \"kubernetes.io/projected/6f3fb7f5-2925-4714-9e7b-44749885b298-kube-api-access-qczrn\") pod \"machine-config-daemon-mrkmx\" (UID: \"6f3fb7f5-2925-4714-9e7b-44749885b298\") " pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919880 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919905 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-cni-binary-copy\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919926 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-run-openvswitch\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919945 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-cni-bin\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919965 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/96ceb606-f7e2-4d60-a632-a9443e01b99a-env-overrides\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.919988 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-host-var-lib-kubelet\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.920009 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-host-var-lib-cni-multus\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.920032 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-etc-openvswitch\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.920060 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdxz8\" (UniqueName: \"kubernetes.io/projected/f02fb217-0bb2-4720-b223-3e3dcf0cff3f-kube-api-access-kdxz8\") pod \"multus-additional-cni-plugins-xgf2q\" (UID: \"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf2q" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.920085 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-os-release\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.920107 4767 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-run-ovn\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.920129 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-cni-netd\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.920151 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lnqj\" (UniqueName: \"kubernetes.io/projected/96ceb606-f7e2-4d60-a632-a9443e01b99a-kube-api-access-2lnqj\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.920172 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-hostroot\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.920195 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-etc-kubernetes\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.920251 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-systemd-units\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.920272 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-host-var-lib-cni-bin\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.920293 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-slash\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.920315 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f02fb217-0bb2-4720-b223-3e3dcf0cff3f-cni-binary-copy\") pod \"multus-additional-cni-plugins-xgf2q\" (UID: \"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf2q" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.930580 
4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.940457 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.950624 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.958468 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.969919 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.982172 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:49:59 crc kubenswrapper[4767]: I0127 15:49:59.991421 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\"
:\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.000435 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.011157 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.021688 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f02fb217-0bb2-4720-b223-3e3dcf0cff3f-system-cni-dir\") pod \"multus-additional-cni-plugins-xgf2q\" (UID: \"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf2q" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.021739 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-multus-cni-dir\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.021764 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-multus-socket-dir-parent\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.021790 4767 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-host-run-netns\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.021811 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-multus-daemon-config\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.021809 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f02fb217-0bb2-4720-b223-3e3dcf0cff3f-system-cni-dir\") pod \"multus-additional-cni-plugins-xgf2q\" (UID: \"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf2q" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.021831 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6f3fb7f5-2925-4714-9e7b-44749885b298-mcd-auth-proxy-config\") pod \"machine-config-daemon-mrkmx\" (UID: \"6f3fb7f5-2925-4714-9e7b-44749885b298\") " pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.021853 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-var-lib-openvswitch\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.021875 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-host-run-netns\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.022108 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-multus-cni-dir\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.022153 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-multus-socket-dir-parent\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.022513 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-var-lib-openvswitch\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.021873 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/96ceb606-f7e2-4d60-a632-a9443e01b99a-ovnkube-config\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.022717 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-run-netns\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.022735 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6f3fb7f5-2925-4714-9e7b-44749885b298-mcd-auth-proxy-config\") pod \"machine-config-daemon-mrkmx\" (UID: \"6f3fb7f5-2925-4714-9e7b-44749885b298\") " pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.022747 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-host-run-multus-certs\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.022772 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6f3fb7f5-2925-4714-9e7b-44749885b298-proxy-tls\") pod \"machine-config-daemon-mrkmx\" (UID: \"6f3fb7f5-2925-4714-9e7b-44749885b298\") " pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.022799 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-run-netns\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.022799 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-system-cni-dir\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.022774 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-host-run-multus-certs\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.022854 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.022880 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdgcn\" (UniqueName: 
\"kubernetes.io/projected/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-kube-api-access-wdgcn\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.022902 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/96ceb606-f7e2-4d60-a632-a9443e01b99a-ovn-node-metrics-cert\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.022923 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-cnibin\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.022946 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-log-socket\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.022966 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-run-ovn-kubernetes\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.022989 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6f3fb7f5-2925-4714-9e7b-44749885b298-rootfs\") pod \"machine-config-daemon-mrkmx\" (UID: \"6f3fb7f5-2925-4714-9e7b-44749885b298\") " pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023006 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-cnibin\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023013 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-host-run-k8s-cni-cncf-io\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023035 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-node-log\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023079 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-multus-daemon-config\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " 
pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023055 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/96ceb606-f7e2-4d60-a632-a9443e01b99a-ovnkube-script-lib\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023132 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f02fb217-0bb2-4720-b223-3e3dcf0cff3f-cnibin\") pod \"multus-additional-cni-plugins-xgf2q\" (UID: \"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf2q" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023155 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-run-systemd\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023173 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f02fb217-0bb2-4720-b223-3e3dcf0cff3f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xgf2q\" (UID: \"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf2q" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023219 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023238 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-kubelet\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023255 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-multus-conf-dir\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023274 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f02fb217-0bb2-4720-b223-3e3dcf0cff3f-os-release\") pod \"multus-additional-cni-plugins-xgf2q\" (UID: \"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf2q" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023293 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f02fb217-0bb2-4720-b223-3e3dcf0cff3f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xgf2q\" (UID: \"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\") " 
pod="openshift-multus/multus-additional-cni-plugins-xgf2q" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023311 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qczrn\" (UniqueName: \"kubernetes.io/projected/6f3fb7f5-2925-4714-9e7b-44749885b298-kube-api-access-qczrn\") pod \"machine-config-daemon-mrkmx\" (UID: \"6f3fb7f5-2925-4714-9e7b-44749885b298\") " pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.022957 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-system-cni-dir\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023330 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:00 crc kubenswrapper[4767]: E0127 15:50:00.023369 4767 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:50:00 crc kubenswrapper[4767]: E0127 15:50:00.023408 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:50:02.023397623 +0000 UTC m=+24.412415216 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023404 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023437 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-cni-binary-copy\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023457 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-run-openvswitch\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023474 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-cni-bin\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023495 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/96ceb606-f7e2-4d60-a632-a9443e01b99a-env-overrides\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023500 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/96ceb606-f7e2-4d60-a632-a9443e01b99a-ovnkube-config\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023539 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-host-var-lib-kubelet\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023516 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-host-var-lib-kubelet\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023572 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-log-socket\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023580 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-host-var-lib-cni-multus\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023602 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-run-ovn-kubernetes\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023604 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-etc-openvswitch\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023626 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-etc-openvswitch\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023628 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f02fb217-0bb2-4720-b223-3e3dcf0cff3f-cnibin\") pod \"multus-additional-cni-plugins-xgf2q\" (UID: \"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf2q" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023635 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdxz8\" (UniqueName: \"kubernetes.io/projected/f02fb217-0bb2-4720-b223-3e3dcf0cff3f-kube-api-access-kdxz8\") pod \"multus-additional-cni-plugins-xgf2q\" (UID: \"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf2q" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023664 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-host-run-k8s-cni-cncf-io\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023669 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-os-release\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023695 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-run-ovn\") pod 
\"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023768 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023789 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6f3fb7f5-2925-4714-9e7b-44749885b298-rootfs\") pod \"machine-config-daemon-mrkmx\" (UID: \"6f3fb7f5-2925-4714-9e7b-44749885b298\") " pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023803 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-cni-bin\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023921 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f02fb217-0bb2-4720-b223-3e3dcf0cff3f-os-release\") pod \"multus-additional-cni-plugins-xgf2q\" (UID: \"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf2q" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023945 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-run-systemd\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023968 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-cni-netd\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.023987 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lnqj\" (UniqueName: \"kubernetes.io/projected/96ceb606-f7e2-4d60-a632-a9443e01b99a-kube-api-access-2lnqj\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.024004 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-hostroot\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.024022 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-etc-kubernetes\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " 
pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.024040 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-systemd-units\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.024060 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-host-var-lib-cni-bin\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.024078 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-slash\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.024159 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f02fb217-0bb2-4720-b223-3e3dcf0cff3f-cni-binary-copy\") pod \"multus-additional-cni-plugins-xgf2q\" (UID: \"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf2q" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.024170 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-cni-binary-copy\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.024237 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-node-log\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.024260 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-run-openvswitch\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.024275 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-slash\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.024419 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/96ceb606-f7e2-4d60-a632-a9443e01b99a-env-overrides\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.024550 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/96ceb606-f7e2-4d60-a632-a9443e01b99a-ovnkube-script-lib\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.024776 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-host-var-lib-cni-multus\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.024793 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f02fb217-0bb2-4720-b223-3e3dcf0cff3f-cni-binary-copy\") pod \"multus-additional-cni-plugins-xgf2q\" (UID: \"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf2q" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.024821 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-hostroot\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.024826 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-os-release\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.024836 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-etc-kubernetes\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.024850 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-cni-netd\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.024855 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-systemd-units\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.024881 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-host-var-lib-cni-bin\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.024898 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-multus-conf-dir\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" 
Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.024909 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f02fb217-0bb2-4720-b223-3e3dcf0cff3f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xgf2q\" (UID: \"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf2q" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.024927 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-run-ovn\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: E0127 15:50:00.024980 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:50:00 crc kubenswrapper[4767]: E0127 15:50:00.024998 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:50:00 crc kubenswrapper[4767]: E0127 15:50:00.025009 4767 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:50:00 crc kubenswrapper[4767]: E0127 15:50:00.025070 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 15:50:02.025061431 +0000 UTC m=+24.414079054 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.027094 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6f3fb7f5-2925-4714-9e7b-44749885b298-proxy-tls\") pod \"machine-config-daemon-mrkmx\" (UID: \"6f3fb7f5-2925-4714-9e7b-44749885b298\") " pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" Jan 27 15:50:00 crc kubenswrapper[4767]: E0127 15:50:00.032409 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:50:00 crc kubenswrapper[4767]: E0127 15:50:00.032442 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:50:00 crc kubenswrapper[4767]: E0127 15:50:00.032456 4767 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:50:00 crc kubenswrapper[4767]: E0127 15:50:00.032513 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 15:50:02.032496206 +0000 UTC m=+24.421513799 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.036285 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-kubelet\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.036696 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.037369 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/96ceb606-f7e2-4d60-a632-a9443e01b99a-ovn-node-metrics-cert\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.047472 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 
15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.050124 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdgcn\" (UniqueName: \"kubernetes.io/projected/cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78-kube-api-access-wdgcn\") pod \"multus-zfxc7\" (UID: \"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\") " pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.050782 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdxz8\" (UniqueName: \"kubernetes.io/projected/f02fb217-0bb2-4720-b223-3e3dcf0cff3f-kube-api-access-kdxz8\") pod \"multus-additional-cni-plugins-xgf2q\" (UID: \"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf2q" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.055426 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qczrn\" (UniqueName: \"kubernetes.io/projected/6f3fb7f5-2925-4714-9e7b-44749885b298-kube-api-access-qczrn\") pod \"machine-config-daemon-mrkmx\" (UID: \"6f3fb7f5-2925-4714-9e7b-44749885b298\") " pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.056687 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lnqj\" (UniqueName: \"kubernetes.io/projected/96ceb606-f7e2-4d60-a632-a9443e01b99a-kube-api-access-2lnqj\") pod \"ovnkube-node-x97k7\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.056973 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f02fb217-0bb2-4720-b223-3e3dcf0cff3f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xgf2q\" (UID: \"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf2q" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.059575 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.072720 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection 
refused" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.092783 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainer
Statuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.103901 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.114376 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.125056 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.133265 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.144022 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.152759 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.158858 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.175121 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.187276 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.197661 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-zfxc7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.204808 4767 util.go:30] "No sandbox for pod can be found. 
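
The cluster of "No sandbox for pod can be found. Need to start a new one" messages is the expected pattern when the kubelet comes back up and the previous pod sandboxes are gone: every daemonset pod on the node gets a fresh sandbox. The long run of "Cleaned up orphaned pod volumes dir" lines that follows is the companion housekeeping pass, in which the kubelet removes /var/lib/kubelet/pods/<uid>/volumes directories left behind by pods that no longer exist. A minimal Go sketch of that comparison, with a stand-in activePods set (the real kubelet derives the set from its pod manager):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Stand-in for the set of pods the kubelet still manages; UIDs of
	// anything else found on disk are candidates for cleanup.
	activePods := map[string]bool{
		"96ceb606-f7e2-4d60-a632-a9443e01b99a": true, // ovnkube-node-x97k7, from the log
	}
	entries, err := os.ReadDir("/var/lib/kubelet/pods")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, e := range entries {
		// Each directory here is named by pod UID.
		if e.IsDir() && !activePods[e.Name()] {
			fmt.Println("orphaned pod dir:", e.Name())
		}
	}
}
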
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.331619 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.332579 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.333894 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.334672 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.335961 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.336581 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.337229 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.337834 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.338646 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.339171 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.339874 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.340691 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.341246 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.341745 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.342315 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.342859 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.344135 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.344614 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.345358 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.346525 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.347082 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.348353 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.348921 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.350260 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.350854 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.351563 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.352713 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.353341 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.354544 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.355041 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.356010 4767 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.356142 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.358146 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.358803 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.359744 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.361688 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.362727 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.363944 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.364794 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.366063 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.366726 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.368017 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.368859 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.369940 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.370573 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.371724 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.372705 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.373837 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.374624 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.375298 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.375911 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.377076 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.377860 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.378992 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.405508 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 12:57:36.199583128 +0000 UTC Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.485353 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" 
event={"ID":"96ceb606-f7e2-4d60-a632-a9443e01b99a","Type":"ContainerStarted","Data":"aad581c6d3092293f8654fbcd197e311bd134a859ed2e9d73d4e66e141518e4c"} Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.488126 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-cksm8" event={"ID":"3b53edc9-0d4a-4d33-ba63-43a9dc551cef","Type":"ContainerStarted","Data":"bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258"} Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.488151 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-cksm8" event={"ID":"3b53edc9-0d4a-4d33-ba63-43a9dc551cef","Type":"ContainerStarted","Data":"00028bdd2b33b4946fa9c6a149f95e683f03d8747fa0abdbe4f7451984be6ff0"} Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.492324 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zfxc7" event={"ID":"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78","Type":"ContainerStarted","Data":"c9016f4b8ea30fb9b7423e4b67d2ad124cb74fe4d17bb131fab16f10f8f9551e"} Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.494140 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" event={"ID":"f02fb217-0bb2-4720-b223-3e3dcf0cff3f","Type":"ContainerStarted","Data":"e662f7c0cc91fa54054cf649124f937792c61a4fc5a98465ec9b03556baf18b5"} Jan 27 15:50:00 crc kubenswrapper[4767]: I0127 15:50:00.497286 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerStarted","Data":"2f93a9ec1ea6230a3e15b164b5bc9fcd6f1e54c5eb1d5fa3eb43c56680597a73"} Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.324855 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.324926 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:50:01 crc kubenswrapper[4767]: E0127 15:50:01.325320 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:50:01 crc kubenswrapper[4767]: E0127 15:50:01.325413 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.324934 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:01 crc kubenswrapper[4767]: E0127 15:50:01.325482 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.396870 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-d66w2"] Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.397170 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-d66w2" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.399512 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.399574 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.400293 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.400643 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.406568 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 06:22:51.962718588 +0000 UTC Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.419062 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 
15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.430388 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.441675 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad34d879-c8b8-494a-81e7-69d72a3a48fb-host\") pod \"node-ca-d66w2\" (UID: \"ad34d879-c8b8-494a-81e7-69d72a3a48fb\") " pod="openshift-image-registry/node-ca-d66w2" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.441763 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjpsx\" (UniqueName: \"kubernetes.io/projected/ad34d879-c8b8-494a-81e7-69d72a3a48fb-kube-api-access-sjpsx\") pod \"node-ca-d66w2\" (UID: \"ad34d879-c8b8-494a-81e7-69d72a3a48fb\") " pod="openshift-image-registry/node-ca-d66w2" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.441789 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ad34d879-c8b8-494a-81e7-69d72a3a48fb-serviceca\") pod \"node-ca-d66w2\" (UID: \"ad34d879-c8b8-494a-81e7-69d72a3a48fb\") " pod="openshift-image-registry/node-ca-d66w2" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.441982 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.459111 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:01 crc kubenswrapper[4767]: 
I0127 15:50:01.471506 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\
\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.484970 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.499944 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.500584 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7"} Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.502232 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zfxc7" event={"ID":"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78","Type":"ContainerStarted","Data":"3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d"} Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.503855 4767 generic.go:334] "Generic (PLEG): container finished" podID="f02fb217-0bb2-4720-b223-3e3dcf0cff3f" containerID="a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f" exitCode=0 Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.503900 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" event={"ID":"f02fb217-0bb2-4720-b223-3e3dcf0cff3f","Type":"ContainerDied","Data":"a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f"} Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.505473 4767 generic.go:334] "Generic (PLEG): container finished" podID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerID="46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d" exitCode=0 Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.505564 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" event={"ID":"96ceb606-f7e2-4d60-a632-a9443e01b99a","Type":"ContainerDied","Data":"46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d"} Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.507502 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerStarted","Data":"618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed"} Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.507536 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerStarted","Data":"21d40a73dfb9ca034fd8cf554f1d216546f248cfb6a917464c43bc4f15d0546a"} Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.524707 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.544183 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjpsx\" (UniqueName: \"kubernetes.io/projected/ad34d879-c8b8-494a-81e7-69d72a3a48fb-kube-api-access-sjpsx\") pod \"node-ca-d66w2\" (UID: \"ad34d879-c8b8-494a-81e7-69d72a3a48fb\") " pod="openshift-image-registry/node-ca-d66w2" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.544269 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ad34d879-c8b8-494a-81e7-69d72a3a48fb-serviceca\") pod \"node-ca-d66w2\" (UID: \"ad34d879-c8b8-494a-81e7-69d72a3a48fb\") " pod="openshift-image-registry/node-ca-d66w2" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.544365 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad34d879-c8b8-494a-81e7-69d72a3a48fb-host\") pod \"node-ca-d66w2\" (UID: \"ad34d879-c8b8-494a-81e7-69d72a3a48fb\") " pod="openshift-image-registry/node-ca-d66w2" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.545678 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad34d879-c8b8-494a-81e7-69d72a3a48fb-host\") pod \"node-ca-d66w2\" (UID: 
\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\") " pod="openshift-image-registry/node-ca-d66w2" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.546838 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ad34d879-c8b8-494a-81e7-69d72a3a48fb-serviceca\") pod \"node-ca-d66w2\" (UID: \"ad34d879-c8b8-494a-81e7-69d72a3a48fb\") " pod="openshift-image-registry/node-ca-d66w2" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.547087 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.559753 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.566823 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjpsx\" (UniqueName: \"kubernetes.io/projected/ad34d879-c8b8-494a-81e7-69d72a3a48fb-kube-api-access-sjpsx\") pod \"node-ca-d66w2\" (UID: \"ad34d879-c8b8-494a-81e7-69d72a3a48fb\") " pod="openshift-image-registry/node-ca-d66w2" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.570554 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.580979 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.596029 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.610711 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.627428 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.647376 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.661733 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.681410 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.696706 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034f
d8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.711478 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.715342 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-d66w2" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.727837 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:01 crc kubenswrapper[4767]: W0127 15:50:01.729900 4767 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad34d879_c8b8_494a_81e7_69d72a3a48fb.slice/crio-f958ed57d110122aa026da35d8fc90fa6e687da3a370c8cd9b8f34bd1798464c WatchSource:0}: Error finding container f958ed57d110122aa026da35d8fc90fa6e687da3a370c8cd9b8f34bd1798464c: Status 404 returned error can't find the container with id f958ed57d110122aa026da35d8fc90fa6e687da3a370c8cd9b8f34bd1798464c Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.752838 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:01Z 
is after 2025-08-24T17:21:41Z" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.772699 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.787616 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.815412 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.833230 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.855564 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.874155 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:01Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.951776 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:50:01 crc kubenswrapper[4767]: I0127 15:50:01.951972 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:01 crc kubenswrapper[4767]: E0127 15:50:01.952098 4767 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:50:01 crc kubenswrapper[4767]: E0127 15:50:01.952162 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:50:05.952143261 +0000 UTC m=+28.341160774 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:50:01 crc kubenswrapper[4767]: E0127 15:50:01.952650 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:50:05.952642325 +0000 UTC m=+28.341659848 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.052649 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.052709 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.052737 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:02 crc kubenswrapper[4767]: E0127 15:50:02.052838 4767 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:50:02 crc kubenswrapper[4767]: E0127 15:50:02.052879 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:50:02 crc kubenswrapper[4767]: E0127 15:50:02.052908 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:50:06.052890279 +0000 UTC m=+28.441907802 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:50:02 crc kubenswrapper[4767]: E0127 15:50:02.052922 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:50:02 crc kubenswrapper[4767]: E0127 15:50:02.052937 4767 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:50:02 crc kubenswrapper[4767]: E0127 15:50:02.052996 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 15:50:06.052978101 +0000 UTC m=+28.441995694 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:50:02 crc kubenswrapper[4767]: E0127 15:50:02.053022 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:50:02 crc kubenswrapper[4767]: E0127 15:50:02.053085 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:50:02 crc kubenswrapper[4767]: E0127 15:50:02.053109 4767 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:50:02 crc kubenswrapper[4767]: E0127 15:50:02.053233 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 15:50:06.053170607 +0000 UTC m=+28.442188300 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.406774 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 23:38:27.608377532 +0000 UTC Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.516659 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-d66w2" event={"ID":"ad34d879-c8b8-494a-81e7-69d72a3a48fb","Type":"ContainerStarted","Data":"ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a"} Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.516706 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-d66w2" event={"ID":"ad34d879-c8b8-494a-81e7-69d72a3a48fb","Type":"ContainerStarted","Data":"f958ed57d110122aa026da35d8fc90fa6e687da3a370c8cd9b8f34bd1798464c"} Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.519291 4767 generic.go:334] "Generic (PLEG): container finished" podID="f02fb217-0bb2-4720-b223-3e3dcf0cff3f" containerID="aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f" exitCode=0 Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.519338 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" event={"ID":"f02fb217-0bb2-4720-b223-3e3dcf0cff3f","Type":"ContainerDied","Data":"aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f"} Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.526697 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" event={"ID":"96ceb606-f7e2-4d60-a632-a9443e01b99a","Type":"ContainerStarted","Data":"e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a"} Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.526761 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" event={"ID":"96ceb606-f7e2-4d60-a632-a9443e01b99a","Type":"ContainerStarted","Data":"740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d"} Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.526777 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" event={"ID":"96ceb606-f7e2-4d60-a632-a9443e01b99a","Type":"ContainerStarted","Data":"34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e"} Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.534232 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.551234 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034f
d8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.567754 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.581342 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.603715 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.619984 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 
15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.635489 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.648743 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.663134 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.679046 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:02 crc 
kubenswrapper[4767]: I0127 15:50:02.693719 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.705026 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.720464 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.733149 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.748390 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.767409 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:02Z 
is after 2025-08-24T17:21:41Z" Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.782911 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.794847 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.806748 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.818739 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.835889 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/r
un/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.851069 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.864343 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.881730 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.898192 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.911330 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.929290 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:02 crc kubenswrapper[4767]: I0127 15:50:02.940293 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034f
d8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:02Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:03 crc kubenswrapper[4767]: I0127 15:50:03.325277 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:50:03 crc kubenswrapper[4767]: I0127 15:50:03.325334 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:03 crc kubenswrapper[4767]: I0127 15:50:03.325379 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:50:03 crc kubenswrapper[4767]: E0127 15:50:03.325442 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:50:03 crc kubenswrapper[4767]: E0127 15:50:03.325616 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:50:03 crc kubenswrapper[4767]: E0127 15:50:03.325709 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:50:03 crc kubenswrapper[4767]: I0127 15:50:03.407557 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 04:26:00.599533047 +0000 UTC Jan 27 15:50:03 crc kubenswrapper[4767]: I0127 15:50:03.529737 4767 generic.go:334] "Generic (PLEG): container finished" podID="f02fb217-0bb2-4720-b223-3e3dcf0cff3f" containerID="aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac" exitCode=0 Jan 27 15:50:03 crc kubenswrapper[4767]: I0127 15:50:03.529815 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" event={"ID":"f02fb217-0bb2-4720-b223-3e3dcf0cff3f","Type":"ContainerDied","Data":"aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac"} Jan 27 15:50:03 crc kubenswrapper[4767]: I0127 15:50:03.533735 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" event={"ID":"96ceb606-f7e2-4d60-a632-a9443e01b99a","Type":"ContainerStarted","Data":"6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a"} Jan 27 15:50:03 crc kubenswrapper[4767]: I0127 15:50:03.533781 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" event={"ID":"96ceb606-f7e2-4d60-a632-a9443e01b99a","Type":"ContainerStarted","Data":"e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac"} Jan 27 15:50:03 crc kubenswrapper[4767]: I0127 15:50:03.533799 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" event={"ID":"96ceb606-f7e2-4d60-a632-a9443e01b99a","Type":"ContainerStarted","Data":"137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f"} Jan 27 15:50:03 crc kubenswrapper[4767]: I0127 15:50:03.541864 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034fd8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:03Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:03 crc kubenswrapper[4767]: I0127 15:50:03.555184 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30
754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:03Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:03 crc kubenswrapper[4767]: I0127 15:50:03.574615 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 
15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:03Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:03 crc kubenswrapper[4767]: I0127 15:50:03.586581 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:03Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:03 crc kubenswrapper[4767]: I0127 15:50:03.599584 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:03Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:03 crc kubenswrapper[4767]: I0127 15:50:03.620517 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:03Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:03 crc kubenswrapper[4767]: I0127 15:50:03.632624 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:03Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:03 crc kubenswrapper[4767]: I0127 15:50:03.642639 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:03Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:03 crc kubenswrapper[4767]: I0127 15:50:03.652509 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:03Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:03 crc kubenswrapper[4767]: I0127 15:50:03.665469 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:03Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:03 crc kubenswrapper[4767]: I0127 15:50:03.679062 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"
/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:03Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:03 crc kubenswrapper[4767]: I0127 15:50:03.688973 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z
\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:03Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:03 crc kubenswrapper[4767]: I0127 15:50:03.697894 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:03Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:03 crc kubenswrapper[4767]: I0127 15:50:03.709120 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:03Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.408306 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 19:27:49.568258944 +0000 UTC Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.540183 4767 generic.go:334] "Generic (PLEG): container finished" podID="f02fb217-0bb2-4720-b223-3e3dcf0cff3f" containerID="7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9" exitCode=0 Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.540245 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" event={"ID":"f02fb217-0bb2-4720-b223-3e3dcf0cff3f","Type":"ContainerDied","Data":"7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9"} Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.557047 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.569431 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.580314 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.592362 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.603182 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034f
d8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.616721 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 
15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.628960 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.641750 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.660527 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.673792 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k
8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.685865 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.688147 4767 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.692216 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.692265 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.692277 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.692373 4767 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.697097 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.698553 4767 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.698792 4767 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.700414 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:04 crc 
kubenswrapper[4767]: I0127 15:50:04.700451 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.700463 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.700479 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.700491 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:04Z","lastTransitionTime":"2026-01-27T15:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.708184 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:04 crc kubenswrapper[4767]: E0127 15:50:04.710255 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.713139 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.713166 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.713175 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.713189 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.713216 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:04Z","lastTransitionTime":"2026-01-27T15:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:04 crc kubenswrapper[4767]: E0127 15:50:04.723349 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.723910 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.729428 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.729460 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.729469 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.729481 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.729491 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:04Z","lastTransitionTime":"2026-01-27T15:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:04 crc kubenswrapper[4767]: E0127 15:50:04.740273 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.743403 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.743441 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.743450 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.743463 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.743472 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:04Z","lastTransitionTime":"2026-01-27T15:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:04 crc kubenswrapper[4767]: E0127 15:50:04.754074 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.757187 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.757238 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.757248 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.757264 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.757274 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:04Z","lastTransitionTime":"2026-01-27T15:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:04 crc kubenswrapper[4767]: E0127 15:50:04.767516 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:04Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:04 crc kubenswrapper[4767]: E0127 15:50:04.767679 4767 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.768989 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.769007 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.769014 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.769028 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.769037 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:04Z","lastTransitionTime":"2026-01-27T15:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.871600 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.871633 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.871643 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.871658 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.871667 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:04Z","lastTransitionTime":"2026-01-27T15:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.974813 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.974843 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.974851 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.974865 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:04 crc kubenswrapper[4767]: I0127 15:50:04.974873 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:04Z","lastTransitionTime":"2026-01-27T15:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.076906 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.076953 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.076989 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.077012 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.077025 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:05Z","lastTransitionTime":"2026-01-27T15:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.179735 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.179768 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.179802 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.179830 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.179839 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:05Z","lastTransitionTime":"2026-01-27T15:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.282127 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.282176 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.282189 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.282401 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.282420 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:05Z","lastTransitionTime":"2026-01-27T15:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.324505 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.324517 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:50:05 crc kubenswrapper[4767]: E0127 15:50:05.324675 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.324533 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:05 crc kubenswrapper[4767]: E0127 15:50:05.324911 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:50:05 crc kubenswrapper[4767]: E0127 15:50:05.325016 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.385171 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.385243 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.385254 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.385271 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.385284 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:05Z","lastTransitionTime":"2026-01-27T15:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.409332 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 17:55:31.571834325 +0000 UTC Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.487416 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.487457 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.487487 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.487504 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.487516 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:05Z","lastTransitionTime":"2026-01-27T15:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.545807 4767 generic.go:334] "Generic (PLEG): container finished" podID="f02fb217-0bb2-4720-b223-3e3dcf0cff3f" containerID="d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3" exitCode=0 Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.545856 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" event={"ID":"f02fb217-0bb2-4720-b223-3e3dcf0cff3f","Type":"ContainerDied","Data":"d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3"} Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.550665 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" event={"ID":"96ceb606-f7e2-4d60-a632-a9443e01b99a","Type":"ContainerStarted","Data":"1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522"} Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.564378 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:05Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.576551 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:05Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.589689 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.589721 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.589734 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.589751 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.589762 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:05Z","lastTransitionTime":"2026-01-27T15:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.592797 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdx
z8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:03Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:05Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.604333 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:05Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.617266 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:05Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.628252 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:05Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.638490 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:05Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.647975 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:05Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.659062 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:05Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.668924 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034f
d8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:05Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.681522 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:05Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.691661 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.691698 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.691707 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.691722 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.691731 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:05Z","lastTransitionTime":"2026-01-27T15:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.701119 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:05Z 
is after 2025-08-24T17:21:41Z" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.714417 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:05Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.729913 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:05Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.794462 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.794498 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.794508 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.794523 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.794535 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:05Z","lastTransitionTime":"2026-01-27T15:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.896546 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.896586 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.896594 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.896626 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.896638 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:05Z","lastTransitionTime":"2026-01-27T15:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.991303 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.991402 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:05 crc kubenswrapper[4767]: E0127 15:50:05.991551 4767 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:50:05 crc kubenswrapper[4767]: E0127 15:50:05.991656 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:50:13.991589225 +0000 UTC m=+36.380606768 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:50:05 crc kubenswrapper[4767]: E0127 15:50:05.991750 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:50:13.991734179 +0000 UTC m=+36.380751792 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.998894 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.998982 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.998993 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.999010 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:05 crc kubenswrapper[4767]: I0127 15:50:05.999042 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:05Z","lastTransitionTime":"2026-01-27T15:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.093074 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.093142 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.093249 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:50:06 crc kubenswrapper[4767]: E0127 15:50:06.093330 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:50:06 crc kubenswrapper[4767]: E0127 15:50:06.093387 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:50:06 crc kubenswrapper[4767]: E0127 15:50:06.093407 4767 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:50:06 crc kubenswrapper[4767]: E0127 15:50:06.093415 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:50:06 crc kubenswrapper[4767]: E0127 15:50:06.093445 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:50:06 crc kubenswrapper[4767]: E0127 15:50:06.093463 4767 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:50:06 crc kubenswrapper[4767]: E0127 15:50:06.093487 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 15:50:14.093460434 +0000 UTC m=+36.482477987 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:50:06 crc kubenswrapper[4767]: E0127 15:50:06.093529 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 15:50:14.093508906 +0000 UTC m=+36.482526509 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:50:06 crc kubenswrapper[4767]: E0127 15:50:06.093583 4767 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:50:06 crc kubenswrapper[4767]: E0127 15:50:06.093616 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:50:14.093605489 +0000 UTC m=+36.482623152 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.101008 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.101045 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.101056 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.101072 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.101084 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:06Z","lastTransitionTime":"2026-01-27T15:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.203484 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.203516 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.203524 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.203537 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.203546 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:06Z","lastTransitionTime":"2026-01-27T15:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.305905 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.305954 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.305965 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.305983 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.305995 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:06Z","lastTransitionTime":"2026-01-27T15:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.408782 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.408839 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.408851 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.408869 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.408881 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:06Z","lastTransitionTime":"2026-01-27T15:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.409771 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 23:40:45.9758894 +0000 UTC Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.479096 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.492883 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.504414 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.511719 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.511756 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.511765 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.511778 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.511788 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:06Z","lastTransitionTime":"2026-01-27T15:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.517478 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.529014 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034fd8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.541774 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30
754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.566527 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.579369 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.593763 4767 generic.go:334] "Generic (PLEG): container finished" podID="f02fb217-0bb2-4720-b223-3e3dcf0cff3f" containerID="4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851" exitCode=0 Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.593821 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" event={"ID":"f02fb217-0bb2-4720-b223-3e3dcf0cff3f","Type":"ContainerDied","Data":"4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851"} Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.599713 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.614252 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.614313 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.614335 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.614358 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.614377 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:06Z","lastTransitionTime":"2026-01-27T15:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.620522 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b
44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.634818 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.648771 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.662407 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.676824 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.693359 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.704339 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.714922 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.715933 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.715965 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.715973 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.715988 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.715998 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:06Z","lastTransitionTime":"2026-01-27T15:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.727066 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.739500 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.753844 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.766391 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.777264 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.788939 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.806462 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.819087 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.819443 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.819633 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.819791 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.819887 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:06Z","lastTransitionTime":"2026-01-27T15:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.820258 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034fd8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.839051 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.856599 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.874249 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.891271 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:06Z 
is after 2025-08-24T17:21:41Z"
Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.922232 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.922271 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.922280 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.922295 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:06 crc kubenswrapper[4767]: I0127 15:50:06.922306 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:06Z","lastTransitionTime":"2026-01-27T15:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.024154 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.024212 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.024225 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.024241 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.024253 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:07Z","lastTransitionTime":"2026-01-27T15:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.126935 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.127326 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.127344 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.127369 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.127387 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:07Z","lastTransitionTime":"2026-01-27T15:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.230115 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.230146 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.230157 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.230172 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.230185 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:07Z","lastTransitionTime":"2026-01-27T15:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.324976 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.324988 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 15:50:07 crc kubenswrapper[4767]: E0127 15:50:07.325139 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.324988 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 15:50:07 crc kubenswrapper[4767]: E0127 15:50:07.325234 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 15:50:07 crc kubenswrapper[4767]: E0127 15:50:07.325317 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.332507 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.332557 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.332568 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.332586 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.332597 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:07Z","lastTransitionTime":"2026-01-27T15:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.410934 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 17:02:49.521752536 +0000 UTC
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.434840 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.434883 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.434913 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.434928 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.434936 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:07Z","lastTransitionTime":"2026-01-27T15:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.538040 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.538095 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.538119 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.538146 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.538162 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:07Z","lastTransitionTime":"2026-01-27T15:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.603551 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" event={"ID":"f02fb217-0bb2-4720-b223-3e3dcf0cff3f","Type":"ContainerStarted","Data":"bc63d7a0baa85de5b5a49c21dfa58afc0f78980ac5a4d5571a8acc17289df6a8"}
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.610569 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" event={"ID":"96ceb606-f7e2-4d60-a632-a9443e01b99a","Type":"ContainerStarted","Data":"419f3c0426c437e36f06cc773943819e3672a22276084664b814c7154f6d182d"}
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.610921 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.611082 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.618758 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:07Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.683990 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:07Z is after 2025-08-24T17:21:41Z"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.686088 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.686113 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.686121 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.686134 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.686143 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:07Z","lastTransitionTime":"2026-01-27T15:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.688236 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.688359 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.693647 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:07Z is after 2025-08-24T17:21:41Z"
Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.703918 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:07Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.713463 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034f
d8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:07Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.725315 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"imag
e\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:07Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.737192 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:07Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.752710 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:07Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.772692 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:07Z 
is after 2025-08-24T17:21:41Z" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.785624 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc63d7a0baa85de5b5a49c21dfa58afc0f78980ac5a4d5571a8acc17289df6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountP
ath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:07Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.788058 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.788101 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.788113 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.788128 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.788139 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:07Z","lastTransitionTime":"2026-01-27T15:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.796756 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:07Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.807484 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:07Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.820686 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:07Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.831932 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:07Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.843283 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:07Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.854994 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:07Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.865351 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:07Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.878518 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc63d7a0baa85de5b5a49c21dfa58afc0f78980ac5a4d5571a8acc17289df6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:07Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.890659 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.890718 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.890743 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.890764 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.890778 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:07Z","lastTransitionTime":"2026-01-27T15:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.893569 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:07Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.905578 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:07Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.916143 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:07Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.927636 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:07Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.938361 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:07Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.948995 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034f
d8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:07Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.961941 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"imag
e\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:07Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.973021 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:07Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.985823 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:07Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.993346 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.993378 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.993386 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.993403 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:07 crc kubenswrapper[4767]: I0127 15:50:07.993413 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:07Z","lastTransitionTime":"2026-01-27T15:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.002536 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://419f3c0426c437e36f06cc773943819e3672a22276084664b814c7154f6d182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.096160 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.096222 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.096235 
4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.096250 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.096262 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:08Z","lastTransitionTime":"2026-01-27T15:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.120521 4767 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.198606 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.199030 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.199107 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.199177 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.199276 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:08Z","lastTransitionTime":"2026-01-27T15:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.301981 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.302016 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.302026 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.302040 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.302052 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:08Z","lastTransitionTime":"2026-01-27T15:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.338937 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.359997 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.379366 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.390231 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034fd8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.402646 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30
754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.404267 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.404317 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.404332 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.404349 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.404360 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:08Z","lastTransitionTime":"2026-01-27T15:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.411779 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 03:08:54.094603671 +0000 UTC Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.415705 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.434777 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.449154 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.468679 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://419f3c0426c437e36f06cc773943819e3672a222
76084664b814c7154f6d182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.480090 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.493375 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.506369 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.507408 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.507454 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.507468 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.507485 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.507497 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:08Z","lastTransitionTime":"2026-01-27T15:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.522290 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc63d7a0baa85de5b5a49c21dfa58afc0f78980ac5a4d5571a8acc17289df6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.533517 4767 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-zfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:08Z is after 2025-08-24T17:21:41Z"
Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.609627 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.609671 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.609683 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.609697 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.609708 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:08Z","lastTransitionTime":"2026-01-27T15:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.613547 4767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.712423 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.712457 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.712467 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.712486 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.712495 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:08Z","lastTransitionTime":"2026-01-27T15:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.829933 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.829985 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.830000 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.830017 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.830027 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:08Z","lastTransitionTime":"2026-01-27T15:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.931994 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.932036 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.932048 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.932063 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:08 crc kubenswrapper[4767]: I0127 15:50:08.932073 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:08Z","lastTransitionTime":"2026-01-27T15:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.034802 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.034864 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.034882 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.034908 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.034976 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:09Z","lastTransitionTime":"2026-01-27T15:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.137744 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.137813 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.137830 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.137883 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.137912 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:09Z","lastTransitionTime":"2026-01-27T15:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.240445 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.240492 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.240512 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.240528 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.240540 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:09Z","lastTransitionTime":"2026-01-27T15:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.324913 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 15:50:09 crc kubenswrapper[4767]: E0127 15:50:09.325060 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.325638 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.325746 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:50:09 crc kubenswrapper[4767]: E0127 15:50:09.325862 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:50:09 crc kubenswrapper[4767]: E0127 15:50:09.325972 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.342853 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.342897 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.342908 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.342928 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.342940 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:09Z","lastTransitionTime":"2026-01-27T15:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.412518 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 18:25:19.86417718 +0000 UTC Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.446160 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.446282 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.446299 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.446316 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.446364 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:09Z","lastTransitionTime":"2026-01-27T15:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.549082 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.549128 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.549140 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.549155 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.549165 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:09Z","lastTransitionTime":"2026-01-27T15:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.615552 4767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.657264 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.657302 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.657310 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.657324 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.657333 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:09Z","lastTransitionTime":"2026-01-27T15:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.761628 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.761716 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.761743 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.761772 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.761793 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:09Z","lastTransitionTime":"2026-01-27T15:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.865280 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.865313 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.865322 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.865335 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.865344 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:09Z","lastTransitionTime":"2026-01-27T15:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.967838 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.967896 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.967914 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.967939 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:09 crc kubenswrapper[4767]: I0127 15:50:09.967956 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:09Z","lastTransitionTime":"2026-01-27T15:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.069872 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.069903 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.069911 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.069924 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.069932 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:10Z","lastTransitionTime":"2026-01-27T15:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.172748 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.172814 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.172831 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.172853 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.172871 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:10Z","lastTransitionTime":"2026-01-27T15:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.275839 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.276170 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.276183 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.276217 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.276232 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:10Z","lastTransitionTime":"2026-01-27T15:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.379353 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.379399 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.379413 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.379432 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.379446 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:10Z","lastTransitionTime":"2026-01-27T15:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.413278 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 19:59:48.756944876 +0000 UTC Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.481793 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.482075 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.482147 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.482245 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.482305 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:10Z","lastTransitionTime":"2026-01-27T15:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.584448 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.584494 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.584508 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.584527 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.584541 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:10Z","lastTransitionTime":"2026-01-27T15:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.695821 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.695870 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.695883 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.695901 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.695913 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:10Z","lastTransitionTime":"2026-01-27T15:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.798131 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.798170 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.798180 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.798196 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.798221 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:10Z","lastTransitionTime":"2026-01-27T15:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.901730 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.901795 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.901814 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.901838 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:10 crc kubenswrapper[4767]: I0127 15:50:10.901856 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:10Z","lastTransitionTime":"2026-01-27T15:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.004475 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.004520 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.004529 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.004546 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.004560 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:11Z","lastTransitionTime":"2026-01-27T15:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.107072 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.107117 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.107132 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.107148 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.107161 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:11Z","lastTransitionTime":"2026-01-27T15:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.210358 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.210436 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.210454 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.210482 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.210501 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:11Z","lastTransitionTime":"2026-01-27T15:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.312799 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.312846 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.312858 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.312879 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.312891 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:11Z","lastTransitionTime":"2026-01-27T15:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.325280 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.325336 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.325310 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:50:11 crc kubenswrapper[4767]: E0127 15:50:11.325433 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:50:11 crc kubenswrapper[4767]: E0127 15:50:11.325554 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:50:11 crc kubenswrapper[4767]: E0127 15:50:11.325721 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.413547 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 22:21:16.881814024 +0000 UTC Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.416090 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.416150 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.416164 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.416180 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.416190 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:11Z","lastTransitionTime":"2026-01-27T15:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.519310 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.519347 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.519355 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.519371 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.519382 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:11Z","lastTransitionTime":"2026-01-27T15:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.621674 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.621745 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.621760 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.621776 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.621791 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:11Z","lastTransitionTime":"2026-01-27T15:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.622694 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x97k7_96ceb606-f7e2-4d60-a632-a9443e01b99a/ovnkube-controller/0.log" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.629947 4767 generic.go:334] "Generic (PLEG): container finished" podID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerID="419f3c0426c437e36f06cc773943819e3672a22276084664b814c7154f6d182d" exitCode=1 Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.630005 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" event={"ID":"96ceb606-f7e2-4d60-a632-a9443e01b99a","Type":"ContainerDied","Data":"419f3c0426c437e36f06cc773943819e3672a22276084664b814c7154f6d182d"} Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.630845 4767 scope.go:117] "RemoveContainer" containerID="419f3c0426c437e36f06cc773943819e3672a22276084664b814c7154f6d182d" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.643738 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:11Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.658828 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:11Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.679700 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://419f3c0426c437e36f06cc773943819e3672a22276084664b814c7154f6d182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419f3c0426c437e36f06cc773943819e3672a22276084664b814c7154f6d182d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:11Z\\\",\\\"message\\\":\\\"*v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:50:10.721156 6063 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 15:50:10.721235 6063 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:50:10.721154 6063 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 15:50:10.721259 6063 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 15:50:10.721284 6063 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 15:50:10.721285 6063 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 15:50:10.721312 6063 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 15:50:10.721295 6063 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 15:50:10.721332 6063 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 15:50:10.721351 6063 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 15:50:10.721361 6063 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 15:50:10.721367 6063 factory.go:656] Stopping watch factory\\\\nI0127 15:50:10.721375 6063 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:11Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.698515 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:11Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.711769 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:11Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.724025 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.724077 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.724096 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.724123 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.725195 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:11Z","lastTransitionTime":"2026-01-27T15:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.725913 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:11Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.740641 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:11Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.756041 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc63d7a0baa85de5b5a49c21dfa58afc0f78980ac5a4d5571a8acc17289df6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:11Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.771307 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:11Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.782761 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:11Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.793371 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:11Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.805908 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:11Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.818872 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:11Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.827978 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.828016 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.828027 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.828044 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.828058 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:11Z","lastTransitionTime":"2026-01-27T15:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
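
The triple-escaped payloads in these "failed to patch status" records are strategic merge patches, which are much easier to read once unescaped. The $setElementOrder/conditions directive pins the order of the conditions list, whose entries the API server otherwise merges by their "type" key. A minimal Go sketch reconstructing the shape of one such payload (the uid is the network-check-target pod's, taken from the log above; the condition set is trimmed for brevity):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Trimmed reconstruction of one status patch from the log above.
        patch := map[string]any{
            "metadata": map[string]any{
                "uid": "3b6479f0-333b-4a96-9adf-2099afdc2447",
            },
            "status": map[string]any{
                // Strategic-merge-patch directive: keep this ordering of the
                // conditions list after merging entries by their "type" key.
                "$setElementOrder/conditions": []map[string]string{
                    {"type": "PodReadyToStartContainers"},
                    {"type": "Initialized"},
                    {"type": "Ready"},
                    {"type": "ContainersReady"},
                    {"type": "PodScheduled"},
                },
                "conditions": []map[string]any{
                    {"type": "Ready", "status": "False", "reason": "ContainersNotReady"},
                },
            },
        }
        out, _ := json.MarshalIndent(patch, "", "  ")
        fmt.Println(string(out))
    }

None of these patches reach the pods' status fields: the webhook is registered for these updates, so each one fails in admission before it is applied.
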
Has your network provider started?"} Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.835486 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034fd8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:11Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.930084 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.930128 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.930139 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.930157 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:11 crc kubenswrapper[4767]: I0127 15:50:11.930169 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:11Z","lastTransitionTime":"2026-01-27T15:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.033379 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.033431 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.033447 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.033470 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.033486 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:12Z","lastTransitionTime":"2026-01-27T15:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.136825 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.136874 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.136889 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.136904 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.136915 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:12Z","lastTransitionTime":"2026-01-27T15:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.239084 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.239132 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.239143 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.239159 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.239170 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:12Z","lastTransitionTime":"2026-01-27T15:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.342713 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.342785 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.342807 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.342838 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.342861 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:12Z","lastTransitionTime":"2026-01-27T15:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.414708 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 22:40:18.678215858 +0000 UTC Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.446045 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.446099 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.446119 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.446138 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.446151 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:12Z","lastTransitionTime":"2026-01-27T15:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.548629 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.548677 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.548689 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.548707 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.548720 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:12Z","lastTransitionTime":"2026-01-27T15:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.651471 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.651526 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.651547 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.651569 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.651586 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:12Z","lastTransitionTime":"2026-01-27T15:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.754142 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.754187 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.754221 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.754239 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.754251 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:12Z","lastTransitionTime":"2026-01-27T15:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.856874 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.856908 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.856920 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.856938 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.856951 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:12Z","lastTransitionTime":"2026-01-27T15:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.938863 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64"] Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.939293 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.940809 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.941300 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.952068 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-
dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:12Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.958967 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.959005 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.959015 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.959034 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.959045 4767 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:12Z","lastTransitionTime":"2026-01-27T15:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.965140 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:12Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.971070 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl2kx\" (UniqueName: \"kubernetes.io/projected/cfb98be5-2dff-40fa-9106-243d23891837-kube-api-access-fl2kx\") pod \"ovnkube-control-plane-749d76644c-7hl64\" (UID: \"cfb98be5-2dff-40fa-9106-243d23891837\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.971125 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cfb98be5-2dff-40fa-9106-243d23891837-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7hl64\" (UID: \"cfb98be5-2dff-40fa-9106-243d23891837\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.971152 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cfb98be5-2dff-40fa-9106-243d23891837-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7hl64\" (UID: \"cfb98be5-2dff-40fa-9106-243d23891837\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.971174 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cfb98be5-2dff-40fa-9106-243d23891837-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7hl64\" (UID: \"cfb98be5-2dff-40fa-9106-243d23891837\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.979667 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:12Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:12 crc kubenswrapper[4767]: I0127 15:50:12.996167 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://419f3c0426c437e36f06cc773943819e3672a222
76084664b814c7154f6d182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419f3c0426c437e36f06cc773943819e3672a22276084664b814c7154f6d182d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:11Z\\\",\\\"message\\\":\\\"*v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:50:10.721156 6063 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 15:50:10.721235 6063 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:50:10.721154 6063 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 15:50:10.721259 6063 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 15:50:10.721284 6063 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 15:50:10.721285 6063 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 15:50:10.721312 6063 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 15:50:10.721295 6063 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 15:50:10.721332 6063 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 15:50:10.721351 6063 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 15:50:10.721361 6063 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 15:50:10.721367 6063 factory.go:656] Stopping watch factory\\\\nI0127 15:50:10.721375 6063 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:12Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.007780 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/c
ni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.017809 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfb98be5-2dff-40fa-9106-243d23891837\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7hl64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.028603 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.039103 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.050792 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.061566 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.061605 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.061617 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.061634 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.061647 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:13Z","lastTransitionTime":"2026-01-27T15:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.067336 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc63d7a0baa85de5b5a49c21dfa58afc0f78980ac5a4d5571a8acc17289df6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.072669 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cfb98be5-2dff-40fa-9106-243d23891837-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7hl64\" (UID: \"cfb98be5-2dff-40fa-9106-243d23891837\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.072711 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl2kx\" (UniqueName: \"kubernetes.io/projected/cfb98be5-2dff-40fa-9106-243d23891837-kube-api-access-fl2kx\") pod \"ovnkube-control-plane-749d76644c-7hl64\" (UID: \"cfb98be5-2dff-40fa-9106-243d23891837\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.072736 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cfb98be5-2dff-40fa-9106-243d23891837-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7hl64\" (UID: \"cfb98be5-2dff-40fa-9106-243d23891837\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.072757 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cfb98be5-2dff-40fa-9106-243d23891837-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7hl64\" (UID: \"cfb98be5-2dff-40fa-9106-243d23891837\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.073502 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cfb98be5-2dff-40fa-9106-243d23891837-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7hl64\" (UID: \"cfb98be5-2dff-40fa-9106-243d23891837\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.073654 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cfb98be5-2dff-40fa-9106-243d23891837-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7hl64\" (UID: \"cfb98be5-2dff-40fa-9106-243d23891837\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.079297 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.080520 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cfb98be5-2dff-40fa-9106-243d23891837-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7hl64\" (UID: \"cfb98be5-2dff-40fa-9106-243d23891837\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.090639 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.101243 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl2kx\" (UniqueName: \"kubernetes.io/projected/cfb98be5-2dff-40fa-9106-243d23891837-kube-api-access-fl2kx\") pod \"ovnkube-control-plane-749d76644c-7hl64\" (UID: \"cfb98be5-2dff-40fa-9106-243d23891837\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.102487 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.115189 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.124532 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034f
d8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.164402 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.164433 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.164441 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.164454 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.164464 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:13Z","lastTransitionTime":"2026-01-27T15:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.253911 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.270382 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.270420 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.270432 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.270447 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.270459 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:13Z","lastTransitionTime":"2026-01-27T15:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.324658 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.324658 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.324806 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:13 crc kubenswrapper[4767]: E0127 15:50:13.324866 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:50:13 crc kubenswrapper[4767]: E0127 15:50:13.325017 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:50:13 crc kubenswrapper[4767]: E0127 15:50:13.325137 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.373375 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.373434 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.373445 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.373482 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.373494 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:13Z","lastTransitionTime":"2026-01-27T15:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.390151 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.415270 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 00:03:53.503233392 +0000 UTC Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.475958 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.476370 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.476383 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.476402 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.476414 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:13Z","lastTransitionTime":"2026-01-27T15:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.579008 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.579067 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.579083 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.579103 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.579114 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:13Z","lastTransitionTime":"2026-01-27T15:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.638152 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" event={"ID":"cfb98be5-2dff-40fa-9106-243d23891837","Type":"ContainerStarted","Data":"7a971061a55ba26b55e2d42a675d4390ca5ffb5b7165e11f136754a0de9b45c6"} Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.638253 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" event={"ID":"cfb98be5-2dff-40fa-9106-243d23891837","Type":"ContainerStarted","Data":"db406da5a075948dbaf16512e9f6265715e1167ebab81fb47de650527de176a6"} Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.638316 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" event={"ID":"cfb98be5-2dff-40fa-9106-243d23891837","Type":"ContainerStarted","Data":"26de3bca830bc13756717e4e557e2476bd7cffc179dad05e974d2589b899253e"} Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.640497 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x97k7_96ceb606-f7e2-4d60-a632-a9443e01b99a/ovnkube-controller/1.log" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.641274 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x97k7_96ceb606-f7e2-4d60-a632-a9443e01b99a/ovnkube-controller/0.log" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.647066 4767 generic.go:334] "Generic (PLEG): container finished" podID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerID="be6a4c64f5d858129da0e6c2dc7166454971b0a3d9c44de2233ac4d0a93f14ab" exitCode=1 Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.647115 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" event={"ID":"96ceb606-f7e2-4d60-a632-a9443e01b99a","Type":"ContainerDied","Data":"be6a4c64f5d858129da0e6c2dc7166454971b0a3d9c44de2233ac4d0a93f14ab"} Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.647179 4767 scope.go:117] "RemoveContainer" containerID="419f3c0426c437e36f06cc773943819e3672a22276084664b814c7154f6d182d" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.648135 4767 scope.go:117] 
"RemoveContainer" containerID="be6a4c64f5d858129da0e6c2dc7166454971b0a3d9c44de2233ac4d0a93f14ab" Jan 27 15:50:13 crc kubenswrapper[4767]: E0127 15:50:13.648462 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-x97k7_openshift-ovn-kubernetes(96ceb606-f7e2-4d60-a632-a9443e01b99a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.657131 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.675692 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-r296r"] Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.675897 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.676387 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:50:13 crc kubenswrapper[4767]: E0127 15:50:13.676455 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.681923 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.681979 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.681989 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.682003 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.682017 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:13Z","lastTransitionTime":"2026-01-27T15:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.691691 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.707119 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc63d7a0baa85de5b5a49c21dfa58afc0f78980ac5a4d5571a8acc17289df6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.725183 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.741991 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfb98be5-2dff-40fa-9106-243d23891837\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db406da5a075948dbaf16512e9f6265715e1167ebab81fb47de650527de176a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a971061a55ba26b55e2d42a675d4390ca5ffb5b7165e11f136754a0de9b45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7hl64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.761032 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.777917 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.778346 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqk42\" (UniqueName: \"kubernetes.io/projected/03660290-055d-4f50-be45-3d6d9c023b34-kube-api-access-jqk42\") pod \"network-metrics-daemon-r296r\" (UID: \"03660290-055d-4f50-be45-3d6d9c023b34\") " pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.778404 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03660290-055d-4f50-be45-3d6d9c023b34-metrics-certs\") pod \"network-metrics-daemon-r296r\" (UID: \"03660290-055d-4f50-be45-3d6d9c023b34\") " pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.786762 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 
15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.786818 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.786829 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.786848 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.786859 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:13Z","lastTransitionTime":"2026-01-27T15:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.796158 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.810115 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034fd8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.823878 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30
754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.844955 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.863562 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.877922 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.879485 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqk42\" (UniqueName: \"kubernetes.io/projected/03660290-055d-4f50-be45-3d6d9c023b34-kube-api-access-jqk42\") pod \"network-metrics-daemon-r296r\" (UID: \"03660290-055d-4f50-be45-3d6d9c023b34\") " pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.879533 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03660290-055d-4f50-be45-3d6d9c023b34-metrics-certs\") pod \"network-metrics-daemon-r296r\" (UID: \"03660290-055d-4f50-be45-3d6d9c023b34\") " pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:50:13 crc kubenswrapper[4767]: E0127 15:50:13.879653 4767 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:50:13 crc kubenswrapper[4767]: E0127 15:50:13.879702 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03660290-055d-4f50-be45-3d6d9c023b34-metrics-certs podName:03660290-055d-4f50-be45-3d6d9c023b34 nodeName:}" failed. No retries permitted until 2026-01-27 15:50:14.379686697 +0000 UTC m=+36.768704220 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03660290-055d-4f50-be45-3d6d9c023b34-metrics-certs") pod "network-metrics-daemon-r296r" (UID: "03660290-055d-4f50-be45-3d6d9c023b34") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.889716 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.889761 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.889773 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.889789 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.889798 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:13Z","lastTransitionTime":"2026-01-27T15:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.897831 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqk42\" (UniqueName: \"kubernetes.io/projected/03660290-055d-4f50-be45-3d6d9c023b34-kube-api-access-jqk42\") pod \"network-metrics-daemon-r296r\" (UID: \"03660290-055d-4f50-be45-3d6d9c023b34\") " pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.898618 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://419f3c0426c437e36f06cc773943819e3672a222
76084664b814c7154f6d182d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419f3c0426c437e36f06cc773943819e3672a22276084664b814c7154f6d182d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:11Z\\\",\\\"message\\\":\\\"*v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:50:10.721156 6063 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 15:50:10.721235 6063 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:50:10.721154 6063 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 15:50:10.721259 6063 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 15:50:10.721284 6063 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 15:50:10.721285 6063 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 15:50:10.721312 6063 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 15:50:10.721295 6063 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 15:50:10.721332 6063 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 15:50:10.721351 6063 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 15:50:10.721361 6063 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 15:50:10.721367 6063 factory.go:656] Stopping watch factory\\\\nI0127 15:50:10.721375 6063 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.939275 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.963318 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be6a4c64f5d858129da0e6c2dc7166454971b0a3d9c44de2233ac4d0a93f14ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419f3c0426c437e36f06cc773943819e3672a22276084664b814c7154f6d182d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:11Z\\\",\\\"message\\\":\\\"*v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:50:10.721156 6063 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 15:50:10.721235 6063 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 15:50:10.721154 6063 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 15:50:10.721259 6063 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 15:50:10.721284 6063 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 15:50:10.721285 6063 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 15:50:10.721312 6063 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 15:50:10.721295 6063 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 15:50:10.721332 6063 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 15:50:10.721351 6063 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 15:50:10.721361 6063 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 15:50:10.721367 6063 factory.go:656] Stopping watch factory\\\\nI0127 15:50:10.721375 6063 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be6a4c64f5d858129da0e6c2dc7166454971b0a3d9c44de2233ac4d0a93f14ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"controller: failed to start default network controller: could not add Event Handler for eqInformer during egressqosController initialization, handler {0x21cfc60 0x21cf940 0x21cf8e0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:50:13.353414 6213 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"de17f0de-cfb1-4534-bb42-c40f5e050c73\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshif\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.977297 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\"
,\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.987609 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.991314 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.991383 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.991397 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.991416 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.991431 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:13Z","lastTransitionTime":"2026-01-27T15:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:13 crc kubenswrapper[4767]: I0127 15:50:13.999281 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.010954 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.032358 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc63d7a0baa85de5b5a49c21dfa58afc0f78980ac5a4d5571a8acc17289df6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.044794 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.054445 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfb98be5-2dff-40fa-9106-243d23891837\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db406da5a075948dbaf16512e9f6265715e1167ebab81fb47de650527de176a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a971061a55ba26b55e2d42a675d4390ca5ffb5b7165e11f136754a0de9b45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7hl64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.066436 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.078577 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.080928 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.081012 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:14 crc kubenswrapper[4767]: E0127 15:50:14.081131 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:50:30.08109901 +0000 UTC m=+52.470116533 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:50:14 crc kubenswrapper[4767]: E0127 15:50:14.081144 4767 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:50:14 crc kubenswrapper[4767]: E0127 15:50:14.081246 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:50:30.081235474 +0000 UTC m=+52.470253067 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.089483 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.093402 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.093446 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.093458 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:14 crc 
kubenswrapper[4767]: I0127 15:50:14.093473 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.093485 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:14Z","lastTransitionTime":"2026-01-27T15:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.100373 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.111969 4767 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2ae
cb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.125980 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034fd8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.139726 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r296r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03660290-055d-4f50-be45-3d6d9c023b34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r296r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.182693 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.182766 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.182805 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:50:14 crc kubenswrapper[4767]: E0127 15:50:14.182931 4767 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:50:14 crc kubenswrapper[4767]: E0127 
15:50:14.182955 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:50:14 crc kubenswrapper[4767]: E0127 15:50:14.182971 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:50:14 crc kubenswrapper[4767]: E0127 15:50:14.182984 4767 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:50:14 crc kubenswrapper[4767]: E0127 15:50:14.183018 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:50:14 crc kubenswrapper[4767]: E0127 15:50:14.183114 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:50:14 crc kubenswrapper[4767]: E0127 15:50:14.183153 4767 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:50:14 crc kubenswrapper[4767]: E0127 15:50:14.183028 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 15:50:30.183009172 +0000 UTC m=+52.572026685 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:50:14 crc kubenswrapper[4767]: E0127 15:50:14.183339 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:50:30.18328142 +0000 UTC m=+52.572298943 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:50:14 crc kubenswrapper[4767]: E0127 15:50:14.183375 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 15:50:30.183364302 +0000 UTC m=+52.572382055 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.196110 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.196160 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.196172 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.196194 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.196231 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:14Z","lastTransitionTime":"2026-01-27T15:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.298905 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.298946 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.298958 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.298975 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.298988 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:14Z","lastTransitionTime":"2026-01-27T15:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.385093 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03660290-055d-4f50-be45-3d6d9c023b34-metrics-certs\") pod \"network-metrics-daemon-r296r\" (UID: \"03660290-055d-4f50-be45-3d6d9c023b34\") " pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:50:14 crc kubenswrapper[4767]: E0127 15:50:14.385278 4767 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:50:14 crc kubenswrapper[4767]: E0127 15:50:14.385343 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03660290-055d-4f50-be45-3d6d9c023b34-metrics-certs podName:03660290-055d-4f50-be45-3d6d9c023b34 nodeName:}" failed. No retries permitted until 2026-01-27 15:50:15.385326421 +0000 UTC m=+37.774343944 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03660290-055d-4f50-be45-3d6d9c023b34-metrics-certs") pod "network-metrics-daemon-r296r" (UID: "03660290-055d-4f50-be45-3d6d9c023b34") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.400654 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.400685 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.400695 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.400711 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.400722 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:14Z","lastTransitionTime":"2026-01-27T15:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.416091 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 08:58:21.258321002 +0000 UTC Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.503475 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.503511 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.503520 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.503535 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.503548 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:14Z","lastTransitionTime":"2026-01-27T15:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.605942 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.606041 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.606057 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.606082 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.606096 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:14Z","lastTransitionTime":"2026-01-27T15:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.652817 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x97k7_96ceb606-f7e2-4d60-a632-a9443e01b99a/ovnkube-controller/1.log" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.656943 4767 scope.go:117] "RemoveContainer" containerID="be6a4c64f5d858129da0e6c2dc7166454971b0a3d9c44de2233ac4d0a93f14ab" Jan 27 15:50:14 crc kubenswrapper[4767]: E0127 15:50:14.657368 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-x97k7_openshift-ovn-kubernetes(96ceb606-f7e2-4d60-a632-a9443e01b99a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.672042 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.685464 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034fd8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.697451 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-r296r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03660290-055d-4f50-be45-3d6d9c023b34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r296r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.709322 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.709569 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 
15:50:14.709669 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.709765 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.709848 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:14Z","lastTransitionTime":"2026-01-27T15:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.710392 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.727884 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be6a4c64f5d858129da0e6c2dc7166454971b0a3d9c44de2233ac4d0a93f14ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be6a4c64f5d858129da0e6c2dc7166454971b0a3d9c44de2233ac4d0a93f14ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"controller: failed to start default network controller: could not add Event Handler for eqInformer during egressqosController initialization, handler {0x21cfc60 0x21cf940 0x21cf8e0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:50:13.353414 6213 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"de17f0de-cfb1-4534-bb42-c40f5e050c73\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshif\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x97k7_openshift-ovn-kubernetes(96ceb606-f7e2-4d60-a632-a9443e01b99a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.740969 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.754662 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.768567 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.781761 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.795146 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc63d7a0baa85de5b5a49c21dfa58afc0f78980ac5a4d5571a8acc17289df6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.809240 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.812847 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.812876 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.812885 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.812898 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.812907 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:14Z","lastTransitionTime":"2026-01-27T15:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.820266 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfb98be5-2dff-40fa-9106-243d23891837\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db406da5a075948dbaf16512e9f6265715e1167ebab81fb47de650527de176a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a971061a55ba26b55e2d42a675d4390ca5ffb5b7165e11f136754a0de9b45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7hl64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.832954 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.845323 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.854065 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.863587 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:14Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.915657 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.915706 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.915720 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.915738 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:14 crc kubenswrapper[4767]: I0127 15:50:14.915750 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:14Z","lastTransitionTime":"2026-01-27T15:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.018370 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.018456 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.018476 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.018502 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.018533 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:15Z","lastTransitionTime":"2026-01-27T15:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.076823 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.076866 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.076877 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.076900 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.076911 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:15Z","lastTransitionTime":"2026-01-27T15:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:15 crc kubenswrapper[4767]: E0127 15:50:15.090249 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:15Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.093622 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.093661 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.093674 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.093692 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.093702 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:15Z","lastTransitionTime":"2026-01-27T15:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:15 crc kubenswrapper[4767]: E0127 15:50:15.106343 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:15Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.110820 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.110873 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.110881 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.110900 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.110915 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:15Z","lastTransitionTime":"2026-01-27T15:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:15 crc kubenswrapper[4767]: E0127 15:50:15.126348 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:15Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.130847 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.130913 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.130925 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.130942 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.130964 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:15Z","lastTransitionTime":"2026-01-27T15:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:15 crc kubenswrapper[4767]: E0127 15:50:15.142562 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:15Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.146491 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.146539 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.146550 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.146567 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.146580 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:15Z","lastTransitionTime":"2026-01-27T15:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:15 crc kubenswrapper[4767]: E0127 15:50:15.161404 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:15Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:15 crc kubenswrapper[4767]: E0127 15:50:15.161521 4767 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.163141 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.163177 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.163188 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.163221 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.163235 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:15Z","lastTransitionTime":"2026-01-27T15:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.266377 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.266430 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.266445 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.266465 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.266479 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:15Z","lastTransitionTime":"2026-01-27T15:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.325315 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.325399 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.325418 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:50:15 crc kubenswrapper[4767]: E0127 15:50:15.325594 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.326116 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:15 crc kubenswrapper[4767]: E0127 15:50:15.326240 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:50:15 crc kubenswrapper[4767]: E0127 15:50:15.326312 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:50:15 crc kubenswrapper[4767]: E0127 15:50:15.326548 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.368691 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.368847 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.368913 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.368973 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.369029 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:15Z","lastTransitionTime":"2026-01-27T15:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.395461 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03660290-055d-4f50-be45-3d6d9c023b34-metrics-certs\") pod \"network-metrics-daemon-r296r\" (UID: \"03660290-055d-4f50-be45-3d6d9c023b34\") " pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:50:15 crc kubenswrapper[4767]: E0127 15:50:15.395766 4767 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:50:15 crc kubenswrapper[4767]: E0127 15:50:15.395863 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03660290-055d-4f50-be45-3d6d9c023b34-metrics-certs podName:03660290-055d-4f50-be45-3d6d9c023b34 nodeName:}" failed. No retries permitted until 2026-01-27 15:50:17.395842286 +0000 UTC m=+39.784859799 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03660290-055d-4f50-be45-3d6d9c023b34-metrics-certs") pod "network-metrics-daemon-r296r" (UID: "03660290-055d-4f50-be45-3d6d9c023b34") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.416937 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 07:44:54.885461448 +0000 UTC Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.471680 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.471930 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.472009 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.472097 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.472175 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:15Z","lastTransitionTime":"2026-01-27T15:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.574362 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.574402 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.574414 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.574429 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.574438 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:15Z","lastTransitionTime":"2026-01-27T15:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.676587 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.676627 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.676636 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.676651 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.676661 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:15Z","lastTransitionTime":"2026-01-27T15:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.783677 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.783718 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.783727 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.783742 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.783756 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:15Z","lastTransitionTime":"2026-01-27T15:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.887334 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.887400 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.887416 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.887434 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.887446 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:15Z","lastTransitionTime":"2026-01-27T15:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.990352 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.990392 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.990404 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.990420 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:15 crc kubenswrapper[4767]: I0127 15:50:15.990430 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:15Z","lastTransitionTime":"2026-01-27T15:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.093794 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.093837 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.093849 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.093864 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.093875 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:16Z","lastTransitionTime":"2026-01-27T15:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.197421 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.197472 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.197486 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.197504 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.197514 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:16Z","lastTransitionTime":"2026-01-27T15:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.299482 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.299527 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.299538 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.299553 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.299565 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:16Z","lastTransitionTime":"2026-01-27T15:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.402001 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.402047 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.402059 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.402075 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.402091 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:16Z","lastTransitionTime":"2026-01-27T15:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.417411 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 11:53:41.84268234 +0000 UTC Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.503926 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.503986 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.504004 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.504075 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.504095 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:16Z","lastTransitionTime":"2026-01-27T15:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.606529 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.606886 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.606984 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.607080 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.607168 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:16Z","lastTransitionTime":"2026-01-27T15:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.710120 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.710171 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.710185 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.710231 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.710244 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:16Z","lastTransitionTime":"2026-01-27T15:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.812857 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.812905 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.812918 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.812935 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.812947 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:16Z","lastTransitionTime":"2026-01-27T15:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.915328 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.915365 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.915375 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.915392 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:16 crc kubenswrapper[4767]: I0127 15:50:16.915404 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:16Z","lastTransitionTime":"2026-01-27T15:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.017503 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.017535 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.017544 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.017561 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.017573 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:17Z","lastTransitionTime":"2026-01-27T15:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.119517 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.119561 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.119571 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.119585 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.119596 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:17Z","lastTransitionTime":"2026-01-27T15:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.222320 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.222382 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.222394 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.222412 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.222422 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:17Z","lastTransitionTime":"2026-01-27T15:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.324865 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.324907 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.324951 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.325139 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:50:17 crc kubenswrapper[4767]: E0127 15:50:17.325157 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:50:17 crc kubenswrapper[4767]: E0127 15:50:17.325299 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.325366 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.325391 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.325404 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.325420 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:17 crc kubenswrapper[4767]: E0127 15:50:17.325422 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.325434 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:17Z","lastTransitionTime":"2026-01-27T15:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:17 crc kubenswrapper[4767]: E0127 15:50:17.325526 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.417659 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03660290-055d-4f50-be45-3d6d9c023b34-metrics-certs\") pod \"network-metrics-daemon-r296r\" (UID: \"03660290-055d-4f50-be45-3d6d9c023b34\") " pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.417677 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 09:31:32.205211898 +0000 UTC Jan 27 15:50:17 crc kubenswrapper[4767]: E0127 15:50:17.417819 4767 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:50:17 crc kubenswrapper[4767]: E0127 15:50:17.417914 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03660290-055d-4f50-be45-3d6d9c023b34-metrics-certs podName:03660290-055d-4f50-be45-3d6d9c023b34 nodeName:}" failed. No retries permitted until 2026-01-27 15:50:21.417895045 +0000 UTC m=+43.806912568 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03660290-055d-4f50-be45-3d6d9c023b34-metrics-certs") pod "network-metrics-daemon-r296r" (UID: "03660290-055d-4f50-be45-3d6d9c023b34") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.428136 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.428179 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.428190 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.428225 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.428239 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:17Z","lastTransitionTime":"2026-01-27T15:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.530707 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.530749 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.530765 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.530786 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.530796 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:17Z","lastTransitionTime":"2026-01-27T15:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.633341 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.633382 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.633392 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.633406 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.633416 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:17Z","lastTransitionTime":"2026-01-27T15:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.735921 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.735974 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.735986 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.736009 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.736021 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:17Z","lastTransitionTime":"2026-01-27T15:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.838521 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.838568 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.838581 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.838599 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.838610 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:17Z","lastTransitionTime":"2026-01-27T15:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.941142 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.941181 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.941195 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.941225 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:17 crc kubenswrapper[4767]: I0127 15:50:17.941235 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:17Z","lastTransitionTime":"2026-01-27T15:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.044417 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.044444 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.044453 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.044465 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.044475 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:18Z","lastTransitionTime":"2026-01-27T15:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.147003 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.147043 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.147062 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.147083 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.147093 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:18Z","lastTransitionTime":"2026-01-27T15:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.249339 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.249382 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.249392 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.249407 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.249418 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:18Z","lastTransitionTime":"2026-01-27T15:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.341150 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfb98be5-2dff-40fa-9106-243d23891837\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db406da5a075948dbaf16512e9f6265715e1167ebab81fb47de650527de176a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a971061a55ba26b55e2d42a675d4390ca5ffb5b7165e11f136754a0de9b45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7hl64\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.351342 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.351370 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.351378 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.351390 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.351399 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:18Z","lastTransitionTime":"2026-01-27T15:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.355477 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.367147 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.379410 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.393068 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc63d7a0baa85de5b5a49c21dfa58afc0f78980ac5a4d5571a8acc17289df6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.405384 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/et
c/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.417839 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 14:24:08.134093653 +0000 UTC Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.420300 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.432230 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.443420 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.454244 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.454310 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.454322 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.454336 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.454345 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:18Z","lastTransitionTime":"2026-01-27T15:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.455108 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.466325 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034fd8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.480639 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-r296r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03660290-055d-4f50-be45-3d6d9c023b34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r296r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.494901 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.507916 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.520614 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.538217 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be6a4c64f5d858129da0e6c2dc7166454971b0a3d9c44de2233ac4d0a93f14ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be6a4c64f5d858129da0e6c2dc7166454971b0a3d9c44de2233ac4d0a93f14ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"controller: failed to start default network controller: could not add Event Handler for eqInformer during egressqosController initialization, handler {0x21cfc60 0x21cf940 0x21cf8e0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:50:13.353414 6213 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"de17f0de-cfb1-4534-bb42-c40f5e050c73\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshif\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x97k7_openshift-ovn-kubernetes(96ceb606-f7e2-4d60-a632-a9443e01b99a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.557332 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.557389 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.557400 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.557437 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:18 crc kubenswrapper[4767]: I0127 15:50:18.557453 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:18Z","lastTransitionTime":"2026-01-27T15:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 15:50:19 crc kubenswrapper[4767]: I0127 15:50:19.325003 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 15:50:19 crc kubenswrapper[4767]: I0127 15:50:19.325028 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 15:50:19 crc kubenswrapper[4767]: I0127 15:50:19.325028 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 15:50:19 crc kubenswrapper[4767]: I0127 15:50:19.325135 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r"
Jan 27 15:50:19 crc kubenswrapper[4767]: E0127 15:50:19.325280 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 15:50:19 crc kubenswrapper[4767]: E0127 15:50:19.325417 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 15:50:19 crc kubenswrapper[4767]: E0127 15:50:19.325518 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 15:50:19 crc kubenswrapper[4767]: E0127 15:50:19.325814 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34"
Jan 27 15:50:19 crc kubenswrapper[4767]: I0127 15:50:19.417979 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 22:25:35.46305953 +0000 UTC
Jan 27 15:50:20 crc kubenswrapper[4767]: I0127 15:50:20.419123 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 10:24:32.580747408 +0000 UTC
Jan 27 15:50:21 crc kubenswrapper[4767]: I0127 15:50:21.324899 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 15:50:21 crc kubenswrapper[4767]: I0127 15:50:21.324960 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 15:50:21 crc kubenswrapper[4767]: I0127 15:50:21.325063 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 15:50:21 crc kubenswrapper[4767]: E0127 15:50:21.325057 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 15:50:21 crc kubenswrapper[4767]: I0127 15:50:21.325130 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r"
Jan 27 15:50:21 crc kubenswrapper[4767]: E0127 15:50:21.325275 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 15:50:21 crc kubenswrapper[4767]: E0127 15:50:21.325354 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34"
Jan 27 15:50:21 crc kubenswrapper[4767]: E0127 15:50:21.325407 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 15:50:21 crc kubenswrapper[4767]: I0127 15:50:21.420066 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 09:34:37.597156732 +0000 UTC
Jan 27 15:50:21 crc kubenswrapper[4767]: I0127 15:50:21.459318 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03660290-055d-4f50-be45-3d6d9c023b34-metrics-certs\") pod \"network-metrics-daemon-r296r\" (UID: \"03660290-055d-4f50-be45-3d6d9c023b34\") " pod="openshift-multus/network-metrics-daemon-r296r"
Jan 27 15:50:21 crc kubenswrapper[4767]: E0127 15:50:21.459359 4767 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 27 15:50:21 crc kubenswrapper[4767]: E0127 15:50:21.459438 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03660290-055d-4f50-be45-3d6d9c023b34-metrics-certs podName:03660290-055d-4f50-be45-3d6d9c023b34 nodeName:}" failed. No retries permitted until 2026-01-27 15:50:29.459413599 +0000 UTC m=+51.848431132 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03660290-055d-4f50-be45-3d6d9c023b34-metrics-certs") pod "network-metrics-daemon-r296r" (UID: "03660290-055d-4f50-be45-3d6d9c023b34") : object "openshift-multus"/"metrics-daemon-secret" not registered
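The mount failure above arms kubelet's per-operation retry backoff: no retry is permitted for 8s (durationBeforeRetry 8s), and the ovnkube-controller status earlier in the log shows the same doubling pattern for container restarts ("back-off 10s restarting failed container ... CrashLoopBackOff"). A minimal Go sketch of that doubling-with-cap schedule follows; the 500ms base and ~2m cap for volume retries and the 10s base and 5m cap for restarts are upstream kubelet defaults assumed here, not values read from this log.

// backoff.go - sketch of the doubling retry schedule visible above in
// "durationBeforeRetry 8s" (volume mounts) and "back-off 10s restarting
// failed container" (CrashLoopBackOff). Bases and caps are assumed upstream
// kubelet defaults, not values taken from this log.
package main

import (
	"fmt"
	"time"
)

// delayAfter returns the wait enforced after `failures` consecutive
// failures, doubling from base and clamping at limit.
func delayAfter(failures int, base, limit time.Duration) time.Duration {
	d := base
	for i := 1; i < failures; i++ {
		d *= 2
		if d >= limit {
			return limit
		}
	}
	return d
}

func main() {
	// Volume op retries: assumed 500ms base; the observed 8s matches the
	// fifth consecutive failure (500ms * 2^4 = 8s).
	for f := 1; f <= 6; f++ {
		fmt.Printf("mount failure %d -> retry in %s\n", f, delayAfter(f, 500*time.Millisecond, 2*time.Minute+2*time.Second))
	}
	// Container restarts: assumed 10s base, 5m cap; restart 1 waits 10s,
	// matching the CrashLoopBackOff message above.
	for r := 1; r <= 6; r++ {
		fmt.Printf("crash %d -> back-off %s\n", r, delayAfter(r, 10*time.Second, 5*time.Minute))
	}
}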
Has your network provider started?"} Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.420987 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 21:00:25.286732726 +0000 UTC Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.470469 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.470521 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.470529 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.470541 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.470551 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:22Z","lastTransitionTime":"2026-01-27T15:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.572556 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.572813 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.573014 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.573172 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.573286 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:22Z","lastTransitionTime":"2026-01-27T15:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.675347 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.675418 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.675435 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.675465 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.675482 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:22Z","lastTransitionTime":"2026-01-27T15:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.778113 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.778402 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.778512 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.778593 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.778808 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:22Z","lastTransitionTime":"2026-01-27T15:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.882348 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.882394 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.882408 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.882427 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.882463 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:22Z","lastTransitionTime":"2026-01-27T15:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.985605 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.985680 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.985706 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.985734 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:22 crc kubenswrapper[4767]: I0127 15:50:22.985752 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:22Z","lastTransitionTime":"2026-01-27T15:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.087829 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.087874 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.087891 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.087913 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.087930 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:23Z","lastTransitionTime":"2026-01-27T15:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.190648 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.190688 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.190697 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.190712 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.190722 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:23Z","lastTransitionTime":"2026-01-27T15:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.293790 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.293838 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.293849 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.293867 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.293878 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:23Z","lastTransitionTime":"2026-01-27T15:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.324451 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:50:23 crc kubenswrapper[4767]: E0127 15:50:23.324575 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.324911 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:50:23 crc kubenswrapper[4767]: E0127 15:50:23.324961 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.324996 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:50:23 crc kubenswrapper[4767]: E0127 15:50:23.325038 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.325066 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:23 crc kubenswrapper[4767]: E0127 15:50:23.325102 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.396826 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.396859 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.396870 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.396885 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.396894 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:23Z","lastTransitionTime":"2026-01-27T15:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.422079 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 14:49:12.490200493 +0000 UTC Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.499511 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.499582 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.499603 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.499631 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.499667 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:23Z","lastTransitionTime":"2026-01-27T15:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.602238 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.602290 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.602306 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.602326 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.602345 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:23Z","lastTransitionTime":"2026-01-27T15:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.705070 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.705114 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.705127 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.705144 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.705155 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:23Z","lastTransitionTime":"2026-01-27T15:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.808321 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.808378 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.808395 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.808416 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.808428 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:23Z","lastTransitionTime":"2026-01-27T15:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.910973 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.911028 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.911044 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.911067 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:23 crc kubenswrapper[4767]: I0127 15:50:23.911084 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:23Z","lastTransitionTime":"2026-01-27T15:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.013726 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.013761 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.013773 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.013791 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.013803 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:24Z","lastTransitionTime":"2026-01-27T15:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.116868 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.116926 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.116938 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.116966 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.116980 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:24Z","lastTransitionTime":"2026-01-27T15:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.220065 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.220135 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.220159 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.220189 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.220217 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:24Z","lastTransitionTime":"2026-01-27T15:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.322247 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.322293 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.322307 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.322325 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.322336 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:24Z","lastTransitionTime":"2026-01-27T15:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.422322 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 22:43:44.931932072 +0000 UTC Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.425194 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.425285 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.425311 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.425340 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.425364 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:24Z","lastTransitionTime":"2026-01-27T15:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.528050 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.528108 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.528123 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.528141 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.528156 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:24Z","lastTransitionTime":"2026-01-27T15:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.630902 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.630963 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.630972 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.630989 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.630999 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:24Z","lastTransitionTime":"2026-01-27T15:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.733593 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.733661 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.733674 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.733751 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.733766 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:24Z","lastTransitionTime":"2026-01-27T15:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.836729 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.836813 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.836830 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.836850 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.836865 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:24Z","lastTransitionTime":"2026-01-27T15:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.940081 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.940157 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.940168 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.940184 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:24 crc kubenswrapper[4767]: I0127 15:50:24.940193 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:24Z","lastTransitionTime":"2026-01-27T15:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.043550 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.043607 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.043618 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.043636 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.043650 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:25Z","lastTransitionTime":"2026-01-27T15:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.146673 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.146745 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.146755 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.146768 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.146776 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:25Z","lastTransitionTime":"2026-01-27T15:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.249910 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.249963 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.249994 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.250016 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.250027 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:25Z","lastTransitionTime":"2026-01-27T15:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.302637 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.302671 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.302682 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.302697 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.302709 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:25Z","lastTransitionTime":"2026-01-27T15:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:25 crc kubenswrapper[4767]: E0127 15:50:25.320695 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:25Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.324081 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.324113 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.324124 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.324138 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.324146 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:25Z","lastTransitionTime":"2026-01-27T15:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.325877 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:25 crc kubenswrapper[4767]: E0127 15:50:25.325958 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.326033 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.326098 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.326120 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:50:25 crc kubenswrapper[4767]: E0127 15:50:25.326149 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:50:25 crc kubenswrapper[4767]: E0127 15:50:25.326319 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:50:25 crc kubenswrapper[4767]: E0127 15:50:25.326430 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:50:25 crc kubenswrapper[4767]: E0127 15:50:25.336284 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:25Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.340194 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.340250 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.340263 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.340282 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.340294 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:25Z","lastTransitionTime":"2026-01-27T15:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:25 crc kubenswrapper[4767]: E0127 15:50:25.354549 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:25Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.358078 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.358132 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
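
Every one of these patch attempts dies on the same TLS failure: the webhook serving https://127.0.0.1:9743/node presents a certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-27. A minimal sketch for confirming that from the node itself, assuming Python 3 with the third-party cryptography package; the host and port come from the Post URL in the error, everything else is illustrative:

    # Fetch the webhook's serving certificate and print its validity window.
    # Verification is disabled on purpose: the certificate is known-bad and
    # we only want to inspect it, not trust it.
    import socket
    import ssl
    from cryptography import x509  # assumption: pip install cryptography

    HOST, PORT = "127.0.0.1", 9743  # endpoint from the Post URL in the error

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    print("subject:  ", cert.subject.rfc4514_string())
    print("notBefore:", cert.not_valid_before_utc)  # needs cryptography >= 42
    print("notAfter: ", cert.not_valid_after_utc)   # expect 2025-08-24 17:21:41 UTC

Run against this node, the last line should print the same 2025-08-24T17:21:41Z expiry the kubelet keeps reporting.
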
event="NodeHasNoDiskPressure" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.358143 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.358158 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.358170 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:25Z","lastTransitionTime":"2026-01-27T15:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:25 crc kubenswrapper[4767]: E0127 15:50:25.370609 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:25Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.373489 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.373512 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
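
Each retry posts the same strategic-merge patch; its full text appears once in the 15:50:25.336284 attempt above. Under the journal escaping it is plain JSON: a status object carrying $setElementOrder/conditions, allocatable, capacity, conditions, images, and nodeInfo, with the node's cached image list making up nearly all of the bulk. A sketch that recovers it from one raw log line, assuming the \\\" escaping is exactly as the journal shows it (the helper name is mine):

    # Recover the status-patch JSON embedded in a "failed to patch status" line.
    import json
    import re

    def patch_from_log(line: str) -> dict:
        # The payload sits between \"{ ... }\" in the raw journal text.
        m = re.search(r'failed to patch status \\"(\{.*\})\\" for node', line)
        if m is None:
            raise ValueError("not a patch-failure line")
        # Undo one escaping level: every inner quote appears as \\\" in the log.
        # (Assumes the JSON itself contains no other backslash escapes.)
        return json.loads(m.group(1).replace(r'\\\"', '"'))

    # e.g. with `line` holding the full 15:50:25.336284 entry:
    #   patch = patch_from_log(line)
    #   sorted(patch["status"])        -> ['$setElementOrder/conditions', 'allocatable', ...]
    #   len(patch["status"]["images"]) -> the image list dominates the payload size
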
event="NodeHasNoDiskPressure" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.373520 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.373532 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.373540 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:25Z","lastTransitionTime":"2026-01-27T15:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:25 crc kubenswrapper[4767]: E0127 15:50:25.386695 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:25Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:25 crc kubenswrapper[4767]: E0127 15:50:25.386860 4767 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.388660 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
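
"Unable to update node status ... exceeds retry count" closes out one status-sync cycle: the kubelet makes a bounded number of patch attempts per sync (five in the upstream kubelet, which matches the five "will retry" errors above, the first of which opens this excerpt) and then gives up until the next sync tick. A rough sketch of that pattern, with illustrative names only, not the kubelet's actual code:

    # Bounded-retry shape behind "update node status exceeds retry count".
    NODE_STATUS_UPDATE_RETRY = 5  # the upstream kubelet retries 5 times per sync

    def sync_node_status(patch_once) -> bool:
        for _ in range(NODE_STATUS_UPDATE_RETRY):
            try:
                patch_once()  # e.g. PATCH /api/v1/nodes/crc/status
                return True
            except RuntimeError as err:  # the webhook rejection surfaces here
                print(f"Error updating node status, will retry: {err}")
        print("Unable to update node status: update node status exceeds retry count")
        return False

Because the webhook rejects every attempt, all five fail within roughly 60 ms (15:50:25.32 through 15:50:25.38) and the whole cycle repeats on the next sync, which is why the same block recurs throughout this log.
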
event="NodeHasSufficientMemory" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.388696 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.388708 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.388724 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.388737 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:25Z","lastTransitionTime":"2026-01-27T15:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.422742 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 18:44:34.507217029 +0000 UTC Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.491157 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.491228 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.491244 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.491263 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.491276 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:25Z","lastTransitionTime":"2026-01-27T15:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.593555 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.593865 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.594174 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.594523 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.594765 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:25Z","lastTransitionTime":"2026-01-27T15:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.697306 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.697389 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.697411 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.697442 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.697463 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:25Z","lastTransitionTime":"2026-01-27T15:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.800028 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.800071 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.800084 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.800102 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.800114 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:25Z","lastTransitionTime":"2026-01-27T15:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.902501 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.902543 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.902552 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.902567 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:25 crc kubenswrapper[4767]: I0127 15:50:25.902578 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:25Z","lastTransitionTime":"2026-01-27T15:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.005381 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.005445 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.005463 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.005486 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.005503 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:26Z","lastTransitionTime":"2026-01-27T15:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.108380 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.108430 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.108442 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.108462 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.108475 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:26Z","lastTransitionTime":"2026-01-27T15:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.210634 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.210705 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.210719 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.210740 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.210753 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:26Z","lastTransitionTime":"2026-01-27T15:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.313575 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.313625 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.313635 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.313649 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.313658 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:26Z","lastTransitionTime":"2026-01-27T15:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.326060 4767 scope.go:117] "RemoveContainer" containerID="be6a4c64f5d858129da0e6c2dc7166454971b0a3d9c44de2233ac4d0a93f14ab" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.416034 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.416067 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.416076 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.416089 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.416099 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:26Z","lastTransitionTime":"2026-01-27T15:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.423281 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 23:16:16.310386506 +0000 UTC Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.518698 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.518748 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.518758 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.518770 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.518778 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:26Z","lastTransitionTime":"2026-01-27T15:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.621631 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.621680 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.621689 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.621704 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.621715 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:26Z","lastTransitionTime":"2026-01-27T15:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.698579 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x97k7_96ceb606-f7e2-4d60-a632-a9443e01b99a/ovnkube-controller/1.log" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.702109 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" event={"ID":"96ceb606-f7e2-4d60-a632-a9443e01b99a","Type":"ContainerStarted","Data":"638305633f048890b914bbde6c3f9be7c6135bb0565024dcbc3d8fa1facc0d80"} Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.702539 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.716741 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:26Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.724423 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.724453 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.724462 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.724478 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.724488 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:26Z","lastTransitionTime":"2026-01-27T15:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.730995 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:26Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.742913 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:26Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.764582 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc63d7a0baa85de5b5a49c21dfa58afc0f78980ac5a4d5571a8acc17289df6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:26Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.781175 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:26Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.794700 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfb98be5-2dff-40fa-9106-243d23891837\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db406da5a075948dbaf16512e9f6265715e1167ebab81fb47de650527de176a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a971061a55ba26b55e2d42a675d4390ca5ffb5b7165e11f136754a0de9b45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7hl64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:26Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.806156 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:26Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.819169 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:26Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.826666 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.826704 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.826716 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.826731 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.826744 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:26Z","lastTransitionTime":"2026-01-27T15:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.835476 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:26Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.849621 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034fd8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:26Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.867788 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-r296r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03660290-055d-4f50-be45-3d6d9c023b34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r296r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:26Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.884485 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:26Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.902128 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:26Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.918841 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:26Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.930098 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.930156 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.930169 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.930191 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.930226 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:26Z","lastTransitionTime":"2026-01-27T15:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.935332 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:26Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:26 crc kubenswrapper[4767]: I0127 15:50:26.956144 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638305633f048890b914bbde6c3f9be7c6135bb0565024dcbc3d8fa1facc0d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be6a4c64f5d858129da0e6c2dc7166454971b0a3d9c44de2233ac4d0a93f14ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"controller: failed to start default network controller: could not add Event Handler for eqInformer during egressqosController initialization, handler {0x21cfc60 0x21cf940 0x21cf8e0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:50:13.353414 6213 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"de17f0de-cfb1-4534-bb42-c40f5e050c73\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshif\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:26Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.032407 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.032442 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.032454 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.032472 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.032483 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:27Z","lastTransitionTime":"2026-01-27T15:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.134479 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.134522 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.134534 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.134549 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.134560 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:27Z","lastTransitionTime":"2026-01-27T15:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.237325 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.237373 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.237385 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.237404 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.237417 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:27Z","lastTransitionTime":"2026-01-27T15:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.325338 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.325401 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.325358 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.325358 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:27 crc kubenswrapper[4767]: E0127 15:50:27.325482 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:50:27 crc kubenswrapper[4767]: E0127 15:50:27.325524 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:50:27 crc kubenswrapper[4767]: E0127 15:50:27.325671 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:50:27 crc kubenswrapper[4767]: E0127 15:50:27.325766 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.339788 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.339858 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.339872 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.339894 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.339908 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:27Z","lastTransitionTime":"2026-01-27T15:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.424080 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 17:36:38.023441069 +0000 UTC Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.442323 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.442377 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.442392 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.442411 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.442424 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:27Z","lastTransitionTime":"2026-01-27T15:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.545336 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.545394 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.545407 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.545426 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.545457 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:27Z","lastTransitionTime":"2026-01-27T15:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.647646 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.647716 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.647784 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.647816 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.647840 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:27Z","lastTransitionTime":"2026-01-27T15:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.706710 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x97k7_96ceb606-f7e2-4d60-a632-a9443e01b99a/ovnkube-controller/2.log" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.707447 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x97k7_96ceb606-f7e2-4d60-a632-a9443e01b99a/ovnkube-controller/1.log" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.709789 4767 generic.go:334] "Generic (PLEG): container finished" podID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerID="638305633f048890b914bbde6c3f9be7c6135bb0565024dcbc3d8fa1facc0d80" exitCode=1 Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.709834 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" event={"ID":"96ceb606-f7e2-4d60-a632-a9443e01b99a","Type":"ContainerDied","Data":"638305633f048890b914bbde6c3f9be7c6135bb0565024dcbc3d8fa1facc0d80"} Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.709883 4767 scope.go:117] "RemoveContainer" containerID="be6a4c64f5d858129da0e6c2dc7166454971b0a3d9c44de2233ac4d0a93f14ab" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.710511 4767 scope.go:117] "RemoveContainer" containerID="638305633f048890b914bbde6c3f9be7c6135bb0565024dcbc3d8fa1facc0d80" Jan 27 15:50:27 crc kubenswrapper[4767]: E0127 15:50:27.710688 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x97k7_openshift-ovn-kubernetes(96ceb606-f7e2-4d60-a632-a9443e01b99a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.727543 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.743691 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.750354 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.750403 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.750416 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.750432 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.750444 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:27Z","lastTransitionTime":"2026-01-27T15:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.762786 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.781624 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638305633f048890b914bbde6c3f9be7c6135bb0565024dcbc3d8fa1facc0d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be6a4c64f5d858129da0e6c2dc7166454971b0a3d9c44de2233ac4d0a93f14ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"controller: failed to start default network controller: could not add Event Handler for eqInformer during egressqosController initialization, handler {0x21cfc60 0x21cf940 0x21cf8e0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:50:13.353414 6213 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"de17f0de-cfb1-4534-bb42-c40f5e050c73\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshif\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638305633f048890b914bbde6c3f9be7c6135bb0565024dcbc3d8fa1facc0d80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:27Z\\\",\\\"message\\\":\\\"f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:50:27.081841 6411 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nI0127 15:50:27.081445 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081914 6411 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081923 6411 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0127 15:50:27.081929 6411 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0127 15:50:27.081935 6411 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081374 6411 obj_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"
mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.794192 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.804431 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfb98be5-2dff-40fa-9106-243d23891837\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db406da5a075948dbaf16512e9f6265715e1167ebab81fb47de650527de176a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a971061a55ba26b55e2d42a675d4390ca5ffb5b7165e11f136754a0de9b45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7hl64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.814177 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.826809 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.838694 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.852033 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.852068 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.852076 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.852089 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.852100 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:27Z","lastTransitionTime":"2026-01-27T15:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.855096 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc63d7a0baa85de5b5a49c21dfa58afc0f78980ac5a4d5571a8acc17289df6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.866790 4767 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.876195 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.884966 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.895799 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.905311 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034f
d8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.913713 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r296r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03660290-055d-4f50-be45-3d6d9c023b34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r296r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:27Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.954040 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.954105 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.954120 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.954138 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:27 crc kubenswrapper[4767]: I0127 15:50:27.954187 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:27Z","lastTransitionTime":"2026-01-27T15:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.056340 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.056381 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.056392 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.056415 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.056425 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:28Z","lastTransitionTime":"2026-01-27T15:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.158363 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.158408 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.158419 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.158436 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.158447 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:28Z","lastTransitionTime":"2026-01-27T15:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.261109 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.261147 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.261159 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.261174 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.261184 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:28Z","lastTransitionTime":"2026-01-27T15:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.346604 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.359521 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.363484 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.363534 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.363550 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.363571 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.363587 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:28Z","lastTransitionTime":"2026-01-27T15:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.372649 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.389282 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638305633f048890b914bbde6c3f9be7c6135bb0565024dcbc3d8fa1facc0d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be6a4c64f5d858129da0e6c2dc7166454971b0a3d9c44de2233ac4d0a93f14ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"controller: failed to start default network controller: could not add Event Handler for eqInformer during egressqosController initialization, handler {0x21cfc60 0x21cf940 0x21cf8e0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:13Z is after 2025-08-24T17:21:41Z]\\\\nI0127 15:50:13.353414 6213 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"de17f0de-cfb1-4534-bb42-c40f5e050c73\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshif\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638305633f048890b914bbde6c3f9be7c6135bb0565024dcbc3d8fa1facc0d80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:27Z\\\",\\\"message\\\":\\\"f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:50:27.081841 6411 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nI0127 15:50:27.081445 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081914 6411 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081923 6411 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0127 15:50:27.081929 6411 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0127 15:50:27.081935 6411 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081374 6411 obj_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"
mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.399786 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfb98be5-2dff-40fa-9106-243d23891837\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db406da5a075948dbaf16512e9f6265715e1167ebab81fb47de650527de176a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a971061a55ba26b55e2d42a675d4390ca5ffb5b7165e11f136754a0de9b45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7hl64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 
15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.412144 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.424362 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 07:32:16.829667943 +0000 UTC Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.428878 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.443822 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.461599 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc63d7a0baa85de5b5a49c21dfa58afc0f78980ac5a4d5571a8acc17289df6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.465629 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.465671 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:28 crc 
kubenswrapper[4767]: I0127 15:50:28.465683 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.465696 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.465706 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:28Z","lastTransitionTime":"2026-01-27T15:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.474672 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"n
ame\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.486360 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" 
Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.497583 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.508183 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.520498 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.535301 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034f
d8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.546235 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r296r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03660290-055d-4f50-be45-3d6d9c023b34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r296r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.567995 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.568035 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.568047 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.568064 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.568076 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:28Z","lastTransitionTime":"2026-01-27T15:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.670044 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.670092 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.670103 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.670117 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.670128 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:28Z","lastTransitionTime":"2026-01-27T15:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.714860 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x97k7_96ceb606-f7e2-4d60-a632-a9443e01b99a/ovnkube-controller/2.log" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.720660 4767 scope.go:117] "RemoveContainer" containerID="638305633f048890b914bbde6c3f9be7c6135bb0565024dcbc3d8fa1facc0d80" Jan 27 15:50:28 crc kubenswrapper[4767]: E0127 15:50:28.721196 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x97k7_openshift-ovn-kubernetes(96ceb606-f7e2-4d60-a632-a9443e01b99a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.733654 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034fd8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.746725 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-r296r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03660290-055d-4f50-be45-3d6d9c023b34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r296r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.758814 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.774192 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.774415 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.774802 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.774817 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.774838 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.774852 4767 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:28Z","lastTransitionTime":"2026-01-27T15:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.788404 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.803378 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.821442 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638305633f048890b914bbde6c3f9be7c6135bb0565024dcbc3d8fa1facc0d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638305633f048890b914bbde6c3f9be7c6135bb0565024dcbc3d8fa1facc0d80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:27Z\\\",\\\"message\\\":\\\"f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:50:27.081841 6411 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:50:27.081445 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081914 6411 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081923 6411 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0127 15:50:27.081929 6411 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0127 15:50:27.081935 6411 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081374 6411 obj_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x97k7_openshift-ovn-kubernetes(96ceb606-f7e2-4d60-a632-a9443e01b99a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.833335 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.847580 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.861352 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.877471 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.877513 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.877524 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.877545 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.877557 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:28Z","lastTransitionTime":"2026-01-27T15:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.878151 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc63d7a0baa85de5b5a49c21dfa58afc0f78980ac5a4d5571a8acc17289df6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.892193 4767 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-zfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.903583 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfb98be5-2dff-40fa-9106-243d23891837\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db406da5a075948dbaf16512e9f6265715e1167ebab81fb47de650527de176a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a971061a55ba26b55e2d42a675d4390ca5ffb5b7165e11f136754a0de9b45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podI
P\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7hl64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.914864 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.923395 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.932716 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:28Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.980107 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.980383 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.980520 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.980652 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:28 crc kubenswrapper[4767]: I0127 15:50:28.980775 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:28Z","lastTransitionTime":"2026-01-27T15:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.083458 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.083608 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.083619 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.083632 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.083643 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:29Z","lastTransitionTime":"2026-01-27T15:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.186662 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.186734 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.186768 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.186795 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.186815 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:29Z","lastTransitionTime":"2026-01-27T15:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.277727 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.284904 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.289999 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.290049 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.290060 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.290076 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.290089 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:29Z","lastTransitionTime":"2026-01-27T15:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.295483 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.306464 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.317401 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.324628 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.324655 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.324688 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:50:29 crc kubenswrapper[4767]: E0127 15:50:29.324741 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.324777 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:50:29 crc kubenswrapper[4767]: E0127 15:50:29.324839 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:50:29 crc kubenswrapper[4767]: E0127 15:50:29.324924 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:50:29 crc kubenswrapper[4767]: E0127 15:50:29.325030 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.333476 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.346346 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034f
d8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.358429 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r296r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03660290-055d-4f50-be45-3d6d9c023b34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r296r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.377038 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.389043 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.392289 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.392351 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.392377 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.392400 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.392413 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:29Z","lastTransitionTime":"2026-01-27T15:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.424815 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 05:58:23.853188013 +0000 UTC Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.441741 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.461563 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638305633f048890b914bbde6c3f9be7c6135bb0565024dcbc3d8fa1facc0d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638305633f048890b914bbde6c3f9be7c6135bb0565024dcbc3d8fa1facc0d80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:27Z\\\",\\\"message\\\":\\\"f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:50:27.081841 6411 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:50:27.081445 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081914 6411 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081923 6411 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0127 15:50:27.081929 6411 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0127 15:50:27.081935 6411 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081374 6411 obj_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x97k7_openshift-ovn-kubernetes(96ceb606-f7e2-4d60-a632-a9443e01b99a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.473455 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfb98be5-2dff-40fa-9106-243d23891837\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db406da5a075948dbaf16512e9f6265715e1167ebab81fb47de650527de176a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a971061a55ba26b55e2d42a675d4390ca5ffb5b7165e11f136754a0de9b45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7hl64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.487763 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.494988 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.495039 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.495052 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.495071 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.495086 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:29Z","lastTransitionTime":"2026-01-27T15:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.502048 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.512708 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.525760 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc63d7a0baa85de5b5a49c21dfa58afc0f78980ac5a4d5571a8acc17289df6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.537044 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:29Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.539523 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03660290-055d-4f50-be45-3d6d9c023b34-metrics-certs\") pod \"network-metrics-daemon-r296r\" (UID: \"03660290-055d-4f50-be45-3d6d9c023b34\") " pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:50:29 crc kubenswrapper[4767]: E0127 15:50:29.539857 4767 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:50:29 crc kubenswrapper[4767]: E0127 15:50:29.540015 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03660290-055d-4f50-be45-3d6d9c023b34-metrics-certs podName:03660290-055d-4f50-be45-3d6d9c023b34 nodeName:}" failed. No retries permitted until 2026-01-27 15:50:45.539982805 +0000 UTC m=+67.929000348 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03660290-055d-4f50-be45-3d6d9c023b34-metrics-certs") pod "network-metrics-daemon-r296r" (UID: "03660290-055d-4f50-be45-3d6d9c023b34") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.597629 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.597668 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.597676 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.597689 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.597700 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:29Z","lastTransitionTime":"2026-01-27T15:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.700412 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.700489 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.700511 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.700541 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.700563 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:29Z","lastTransitionTime":"2026-01-27T15:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.802867 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.802937 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.802959 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.802987 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.803008 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:29Z","lastTransitionTime":"2026-01-27T15:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.906061 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.906112 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.906128 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.906149 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:29 crc kubenswrapper[4767]: I0127 15:50:29.906167 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:29Z","lastTransitionTime":"2026-01-27T15:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.008512 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.008565 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.008579 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.008597 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.008624 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:30Z","lastTransitionTime":"2026-01-27T15:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.110928 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.111002 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.111012 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.111027 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.111036 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:30Z","lastTransitionTime":"2026-01-27T15:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.145379 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:50:30 crc kubenswrapper[4767]: E0127 15:50:30.145556 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:51:02.145530942 +0000 UTC m=+84.534548465 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.145598 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:30 crc kubenswrapper[4767]: E0127 15:50:30.145742 4767 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:50:30 crc kubenswrapper[4767]: E0127 15:50:30.145786 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:51:02.145779469 +0000 UTC m=+84.534796982 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.213548 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.213611 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.213624 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.213643 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.213657 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:30Z","lastTransitionTime":"2026-01-27T15:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.246661 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.246752 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.246815 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:50:30 crc kubenswrapper[4767]: E0127 15:50:30.247016 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:50:30 crc kubenswrapper[4767]: E0127 15:50:30.247167 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:50:30 crc kubenswrapper[4767]: E0127 15:50:30.247215 4767 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:50:30 crc kubenswrapper[4767]: E0127 15:50:30.247332 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 15:51:02.247309059 +0000 UTC m=+84.636326592 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:50:30 crc kubenswrapper[4767]: E0127 15:50:30.247123 4767 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:50:30 crc kubenswrapper[4767]: E0127 15:50:30.247507 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:50:30 crc kubenswrapper[4767]: E0127 15:50:30.247595 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:50:30 crc kubenswrapper[4767]: E0127 15:50:30.247664 4767 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:50:30 crc kubenswrapper[4767]: E0127 15:50:30.247572 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:51:02.247526225 +0000 UTC m=+84.636543748 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:50:30 crc kubenswrapper[4767]: E0127 15:50:30.247868 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 15:51:02.247845685 +0000 UTC m=+84.636863208 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.316442 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.316768 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.316843 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.316923 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.317007 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:30Z","lastTransitionTime":"2026-01-27T15:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.419414 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.419477 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.419488 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.419509 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.419520 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:30Z","lastTransitionTime":"2026-01-27T15:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.425737 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 21:14:25.807396249 +0000 UTC Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.521888 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.521937 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.521949 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.521966 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.521978 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:30Z","lastTransitionTime":"2026-01-27T15:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.625102 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.625157 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.625171 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.625189 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.625223 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:30Z","lastTransitionTime":"2026-01-27T15:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.727758 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.727806 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.727817 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.727834 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.727847 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:30Z","lastTransitionTime":"2026-01-27T15:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.830563 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.830638 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.830655 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.830677 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.830691 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:30Z","lastTransitionTime":"2026-01-27T15:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.932868 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.932907 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.932917 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.932933 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:30 crc kubenswrapper[4767]: I0127 15:50:30.932944 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:30Z","lastTransitionTime":"2026-01-27T15:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.036559 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.036634 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.036652 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.036676 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.036696 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:31Z","lastTransitionTime":"2026-01-27T15:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.138765 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.138863 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.138881 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.138931 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.138941 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:31Z","lastTransitionTime":"2026-01-27T15:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.241258 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.241298 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.241312 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.241329 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.241345 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:31Z","lastTransitionTime":"2026-01-27T15:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.324473 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.324531 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.324576 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.324489 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:50:31 crc kubenswrapper[4767]: E0127 15:50:31.324668 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:50:31 crc kubenswrapper[4767]: E0127 15:50:31.324775 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:50:31 crc kubenswrapper[4767]: E0127 15:50:31.324869 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:50:31 crc kubenswrapper[4767]: E0127 15:50:31.324961 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.343146 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.343214 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.343233 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.343250 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.343263 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:31Z","lastTransitionTime":"2026-01-27T15:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.426942 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 14:41:56.407271433 +0000 UTC Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.446824 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.446882 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.446892 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.446910 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.446925 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:31Z","lastTransitionTime":"2026-01-27T15:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.446925 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:31Z","lastTransitionTime":"2026-01-27T15:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.549664 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.549749 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.549761 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.549797 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.549811 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:31Z","lastTransitionTime":"2026-01-27T15:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.652134 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.652165 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.652176 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.652193 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.652230 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:31Z","lastTransitionTime":"2026-01-27T15:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.755088 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.755282 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.755369 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.755485 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.755587 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:31Z","lastTransitionTime":"2026-01-27T15:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.860640 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.860973 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.861074 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.861736 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.861830 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:31Z","lastTransitionTime":"2026-01-27T15:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.965041 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.965088 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.965103 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.965120 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:31 crc kubenswrapper[4767]: I0127 15:50:31.965131 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:31Z","lastTransitionTime":"2026-01-27T15:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.068126 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.068173 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.068206 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.068241 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.068253 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:32Z","lastTransitionTime":"2026-01-27T15:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.170581 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.170850 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.170982 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.171077 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.171161 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:32Z","lastTransitionTime":"2026-01-27T15:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.272888 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.272921 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.272939 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.272957 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.272969 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:32Z","lastTransitionTime":"2026-01-27T15:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.375574 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.375813 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.375895 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.375956 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.376045 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:32Z","lastTransitionTime":"2026-01-27T15:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.427457 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 05:37:08.463595144 +0000 UTC Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.478758 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.478796 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.478810 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.478826 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.478837 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:32Z","lastTransitionTime":"2026-01-27T15:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.582552 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.582600 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.582613 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.582631 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
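Note how certificate_manager.go:356 logs the same expiration (2026-02-24 05:53:03 UTC) on every pass but a different rotation deadline each time (2025-12-06 above, 2025-12-17 here, later 2025-11-29 and 2026-01-18): client-go's certificate manager re-picks the deadline with random jitter inside roughly the 70-90% band of the certificate's validity on every attempt. A minimal sketch of that idea, with the band treated as an approximation and an issuance date assumed because the log does not show it:

```go
// rotationdeadline.go - minimal sketch of jittered rotation deadlines,
// illustrating why the deadline above changes while the expiration
// does not. The 70-90% band and all names here are illustrative.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline returns notBefore + r*(notAfter-notBefore)
// for a fresh random r in [0.7, 0.9).
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	r := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(total) * r))
}

func main() {
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiration from the log
	notBefore := notAfter.Add(-30 * 24 * time.Hour)           // issuance assumed, not in the log
	for i := 0; i < 3; i++ {
		// Each call lands somewhere else in the band, like the log lines.
		fmt.Println("rotation deadline:", nextRotationDeadline(notBefore, notAfter))
	}
}
```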
Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.582644 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:32Z","lastTransitionTime":"2026-01-27T15:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.685430 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.685473 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.685484 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.685500 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.685513 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:32Z","lastTransitionTime":"2026-01-27T15:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.787259 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.787711 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.787848 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.788007 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.788093 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:32Z","lastTransitionTime":"2026-01-27T15:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.891534 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.891595 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.891616 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.891636 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.891648 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:32Z","lastTransitionTime":"2026-01-27T15:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.994359 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.994418 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.994429 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.994447 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:32 crc kubenswrapper[4767]: I0127 15:50:32.994458 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:32Z","lastTransitionTime":"2026-01-27T15:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.096960 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.097001 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.097009 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.097024 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.097037 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:33Z","lastTransitionTime":"2026-01-27T15:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.199147 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.199231 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.199242 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.199260 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.199272 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:33Z","lastTransitionTime":"2026-01-27T15:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.302268 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.302321 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.302331 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.302351 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.302361 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:33Z","lastTransitionTime":"2026-01-27T15:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.324731 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.324738 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.324785 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:50:33 crc kubenswrapper[4767]: E0127 15:50:33.325256 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:50:33 crc kubenswrapper[4767]: E0127 15:50:33.324944 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.324800 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:50:33 crc kubenswrapper[4767]: E0127 15:50:33.325476 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:50:33 crc kubenswrapper[4767]: E0127 15:50:33.325311 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.404911 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.404961 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.404971 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.405002 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.405012 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:33Z","lastTransitionTime":"2026-01-27T15:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.428642 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 02:36:46.605560172 +0000 UTC Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.508301 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.508342 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.508357 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.508378 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.508393 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:33Z","lastTransitionTime":"2026-01-27T15:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.610920 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.610962 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.610974 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.610991 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.611009 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:33Z","lastTransitionTime":"2026-01-27T15:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.713026 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.713083 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.713097 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.713119 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.713134 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:33Z","lastTransitionTime":"2026-01-27T15:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.815616 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.815664 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.815681 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.815727 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.815739 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:33Z","lastTransitionTime":"2026-01-27T15:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.918670 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.918710 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.918721 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.918739 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:33 crc kubenswrapper[4767]: I0127 15:50:33.918750 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:33Z","lastTransitionTime":"2026-01-27T15:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.021043 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.021113 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.021137 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.021166 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.021191 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:34Z","lastTransitionTime":"2026-01-27T15:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.123524 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.123580 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.123590 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.123604 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.123613 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:34Z","lastTransitionTime":"2026-01-27T15:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.225592 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.225660 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.225681 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.225711 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.225731 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:34Z","lastTransitionTime":"2026-01-27T15:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.329769 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.329817 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.329826 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.329839 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.329849 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:34Z","lastTransitionTime":"2026-01-27T15:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.429648 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 06:01:14.203520839 +0000 UTC Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.432552 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.432607 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.432620 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.432639 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.432650 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:34Z","lastTransitionTime":"2026-01-27T15:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.535589 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.535632 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.535652 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.535670 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.535683 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:34Z","lastTransitionTime":"2026-01-27T15:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.638194 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.638259 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.638273 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.638288 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.638299 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:34Z","lastTransitionTime":"2026-01-27T15:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.740560 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.740612 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.740625 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.740646 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.740961 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:34Z","lastTransitionTime":"2026-01-27T15:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.843227 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.843273 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.843284 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.843301 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.843311 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:34Z","lastTransitionTime":"2026-01-27T15:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.946301 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.946605 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.946642 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.946697 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:34 crc kubenswrapper[4767]: I0127 15:50:34.946710 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:34Z","lastTransitionTime":"2026-01-27T15:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.049603 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.049647 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.049661 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.049684 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.049698 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:35Z","lastTransitionTime":"2026-01-27T15:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.152957 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.153017 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.153038 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.153063 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.153127 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:35Z","lastTransitionTime":"2026-01-27T15:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.255668 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.255723 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.255735 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.255754 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.255768 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:35Z","lastTransitionTime":"2026-01-27T15:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.324794 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.324848 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.324812 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.324803 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:35 crc kubenswrapper[4767]: E0127 15:50:35.324920 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:50:35 crc kubenswrapper[4767]: E0127 15:50:35.325074 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
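Each "Error syncing pod, skipping" entry carries the pod and podUID as structured key=value fields, so the churn above is easy to quantify offline. A small stand-alone tally sketch follows; the input file name and the regex are illustrative assumptions (e.g. after saving the journal with journalctl -u kubelet > kubelet.log):

```go
// syncskips.go - illustrative tally of "Error syncing pod, skipping"
// entries per pod from a saved journal (file name assumed).
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	f, err := os.Open("kubelet.log")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Matches the structured fields kubenswrapper logs above:
	//   ... "Error syncing pod, skipping" err="..." pod="ns/name" podUID="uuid"
	re := regexp.MustCompile(`"Error syncing pod, skipping".*pod="([^"]+)" podUID="([^"]+)"`)

	counts := map[string]int{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]+" ("+m[2]+")"]++
		}
	}
	for pod, n := range counts {
		fmt.Printf("%5d  %s\n", n, pod)
	}
}
```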
pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:50:35 crc kubenswrapper[4767]: E0127 15:50:35.325149 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.358166 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.358249 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.358282 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.358300 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.358310 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:35Z","lastTransitionTime":"2026-01-27T15:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.430290 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 01:36:26.496166689 +0000 UTC Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.461644 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.461691 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.461706 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.461722 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.461736 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:35Z","lastTransitionTime":"2026-01-27T15:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.461736 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:35Z","lastTransitionTime":"2026-01-27T15:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.564597 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.564644 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.564654 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.564676 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.564693 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:35Z","lastTransitionTime":"2026-01-27T15:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.663346 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.663407 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.663423 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.663443 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.663467 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:35Z","lastTransitionTime":"2026-01-27T15:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:35 crc kubenswrapper[4767]: E0127 15:50:35.674673 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.678235 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.678267 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.678276 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.678311 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.678325 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:35Z","lastTransitionTime":"2026-01-27T15:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:35 crc kubenswrapper[4767]: E0127 15:50:35.692034 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.696109 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.696163 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
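Every status update above fails the same way: the API server cannot deliver the node patch because the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-27. This is the usual symptom of resuming a CRC/OpenShift Local VM long after its internally rotated certificates have lapsed. A minimal sketch of how one might confirm the certificate window from the node follows (illustrative, not cluster tooling; the file name checkcert.go is invented):

    // checkcert.go: a minimal sketch, not part of the cluster tooling.
    // Connects to the webhook endpoint from the log and prints the serving
    // certificate's validity window so the expiry can be confirmed directly.
    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
    )

    func main() {
        // InsecureSkipVerify lets the handshake complete even though the
        // certificate is expired; we only inspect it, we do not trust it.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatalf("dial webhook endpoint: %v", err)
        }
        defer conn.Close()

        for _, cert := range conn.ConnectionState().PeerCertificates {
            fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
                cert.Subject, cert.NotBefore, cert.NotAfter)
        }
    }

Run against this node, the notAfter value would be expected to print 2025-08-24T17:21:41Z, matching the x509 error in the log.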
event="NodeHasNoDiskPressure" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.696175 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.696192 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.696239 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:35Z","lastTransitionTime":"2026-01-27T15:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:35 crc kubenswrapper[4767]: E0127 15:50:35.707360 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.710610 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.710664 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
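The KubeletNotReady condition that keeps being recorded is a separate, downstream symptom: the container runtime reports NetworkReady=false because nothing has yet written a CNI configuration into /etc/kubernetes/cni/net.d/, and the network plugin cannot start while its own components are blocked by the expired certificate. A minimal sketch of a check for that directory follows (hypothetical helper; only the path is taken from the log line):

    // cnicheck.go: a minimal sketch that reports whether any CNI network
    // configuration (.conf/.conflist/.json) exists in the directory the
    // kubelet is complaining about.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d"
        var matches []string
        for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
            m, _ := filepath.Glob(filepath.Join(dir, pat))
            matches = append(matches, m...)
        }
        if len(matches) == 0 {
            fmt.Println("no CNI configuration found; network plugin has not started")
            os.Exit(1)
        }
        for _, f := range matches {
            fmt.Println("found:", f)
        }
    }

On this node the sketch would be expected to exit with "no CNI configuration found", consistent with the NetworkPluginNotReady message repeated above.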
event="NodeHasNoDiskPressure" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.710674 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.710698 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.710741 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:35Z","lastTransitionTime":"2026-01-27T15:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:35 crc kubenswrapper[4767]: E0127 15:50:35.720805 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.724166 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.724214 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
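The err= payload the kubelet keeps re-sending is a strategic merge patch against the Node object's status: the $setElementOrder/conditions directive pins the order of the four conditions being merged, and the patch also carries the full images list and nodeInfo, which is why each failed retry logs several kilobytes of identical JSON. A minimal sketch of that patch shape follows (field names taken from the log; illustrative, not kubelet source):

    // patchshape.go: a minimal sketch of the status patch visible in the
    // log, reduced to one condition for brevity.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    // condition mirrors the fields visible in the logged patch.
    type condition struct {
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Message            string `json:"message"`
        Reason             string `json:"reason"`
        Status             string `json:"status"`
        Type               string `json:"type"`
    }

    func main() {
        patch := map[string]any{
            "status": map[string]any{
                // Strategic-merge directive: pins the order of the merged
                // conditions list without restating every field.
                "$setElementOrder/conditions": []map[string]string{
                    {"type": "MemoryPressure"}, {"type": "DiskPressure"},
                    {"type": "PIDPressure"}, {"type": "Ready"},
                },
                "conditions": []condition{{
                    LastHeartbeatTime:  "2026-01-27T15:50:35Z",
                    LastTransitionTime: "2026-01-27T15:50:35Z",
                    Message:            "container runtime network not ready: NetworkReady=false",
                    Reason:             "KubeletNotReady",
                    Status:             "False",
                    Type:               "Ready",
                }},
            },
        }
        out, err := json.MarshalIndent(patch, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }

Note that the JSON appears triple-escaped in the journal only because the patch is quoted inside the err string, which is itself quoted inside the structured log line.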
event="NodeHasNoDiskPressure" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.724227 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.724245 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.724255 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:35Z","lastTransitionTime":"2026-01-27T15:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:35 crc kubenswrapper[4767]: E0127 15:50:35.743357 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:35Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:35 crc kubenswrapper[4767]: E0127 15:50:35.743473 4767 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.744804 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
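After five consecutive failures the kubelet abandons this sync attempt with "update node status exceeds retry count" and starts a fresh round on its next node-status interval, which is why the same burst of events repeats below. A minimal sketch of that loop follows, assuming a retry budget of 5 to match the five E-lines above (the constant name nodeStatusUpdateRetry exists in upstream kubelet, but this code is illustrative, not kubelet source):

    // retrysketch.go: a minimal sketch of the retry pattern visible in the
    // log; every attempt fails the same way, so the budget is exhausted.
    package main

    import (
        "errors"
        "fmt"
    )

    // nodeStatusUpdateRetry is assumed to be 5, matching the five failed
    // attempts logged before the kubelet gives up.
    const nodeStatusUpdateRetry = 5

    // patchNodeStatus stands in for the PATCH call the webhook rejects.
    func patchNodeStatus() error {
        return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
    }

    func main() {
        for i := 0; i < nodeStatusUpdateRetry; i++ {
            if err := patchNodeStatus(); err != nil {
                fmt.Printf("Error updating node status, will retry: %v\n", err)
                continue
            }
            return // success: status accepted by the API server
        }
        fmt.Println(`Unable to update node status: "update node status exceeds retry count"`)
    }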
event="NodeHasSufficientMemory" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.744830 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.744838 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.744851 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.744861 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:35Z","lastTransitionTime":"2026-01-27T15:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.847647 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.847733 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.847746 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.847768 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.847806 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:35Z","lastTransitionTime":"2026-01-27T15:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.951371 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.951433 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.951441 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.951471 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:35 crc kubenswrapper[4767]: I0127 15:50:35.951480 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:35Z","lastTransitionTime":"2026-01-27T15:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.053624 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.053669 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.053683 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.053700 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.053713 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:36Z","lastTransitionTime":"2026-01-27T15:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.157467 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.157553 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.157586 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.157619 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.157642 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:36Z","lastTransitionTime":"2026-01-27T15:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.260683 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.260737 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.260747 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.260764 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.260776 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:36Z","lastTransitionTime":"2026-01-27T15:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.363042 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.363095 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.363107 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.363123 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.363136 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:36Z","lastTransitionTime":"2026-01-27T15:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.430835 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 10:13:24.263251521 +0000 UTC Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.466950 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.467000 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.467008 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.467027 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.467037 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:36Z","lastTransitionTime":"2026-01-27T15:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.569422 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.569465 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.569479 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.569493 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.569501 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:36Z","lastTransitionTime":"2026-01-27T15:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.672411 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.672481 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.672495 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.672514 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.672526 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:36Z","lastTransitionTime":"2026-01-27T15:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.774563 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.774591 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.774601 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.774613 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.774623 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:36Z","lastTransitionTime":"2026-01-27T15:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.877519 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.877573 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.877591 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.877613 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.877633 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:36Z","lastTransitionTime":"2026-01-27T15:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.980824 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.980882 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.980896 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.980918 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:36 crc kubenswrapper[4767]: I0127 15:50:36.980933 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:36Z","lastTransitionTime":"2026-01-27T15:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.083371 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.083424 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.083434 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.083449 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.083459 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:37Z","lastTransitionTime":"2026-01-27T15:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.185763 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.185817 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.185831 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.185848 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.185857 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:37Z","lastTransitionTime":"2026-01-27T15:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.288521 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.288570 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.288583 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.288602 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.288612 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:37Z","lastTransitionTime":"2026-01-27T15:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.325377 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:37 crc kubenswrapper[4767]: E0127 15:50:37.325505 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.325591 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.325686 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.325798 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:50:37 crc kubenswrapper[4767]: E0127 15:50:37.325840 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:50:37 crc kubenswrapper[4767]: E0127 15:50:37.325937 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:50:37 crc kubenswrapper[4767]: E0127 15:50:37.326025 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.391491 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.391537 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.391549 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.391567 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.391577 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:37Z","lastTransitionTime":"2026-01-27T15:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.431499 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 19:53:50.777041519 +0000 UTC Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.494901 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.494947 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.494961 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.494983 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.494994 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:37Z","lastTransitionTime":"2026-01-27T15:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.597825 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.597871 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.597882 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.597912 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.597927 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:37Z","lastTransitionTime":"2026-01-27T15:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.700944 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.700984 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.700992 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.701004 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.701013 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:37Z","lastTransitionTime":"2026-01-27T15:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.803165 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.803198 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.803219 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.803231 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.803243 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:37Z","lastTransitionTime":"2026-01-27T15:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.905778 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.905824 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.905840 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.905863 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:37 crc kubenswrapper[4767]: I0127 15:50:37.905879 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:37Z","lastTransitionTime":"2026-01-27T15:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.007638 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.007674 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.007682 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.007694 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.007702 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:38Z","lastTransitionTime":"2026-01-27T15:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.109701 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.109738 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.109749 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.109765 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.109776 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:38Z","lastTransitionTime":"2026-01-27T15:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.211910 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.211945 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.211954 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.211968 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.211978 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:38Z","lastTransitionTime":"2026-01-27T15:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.315348 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.315491 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.315507 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.315523 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.315537 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:38Z","lastTransitionTime":"2026-01-27T15:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.337080 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:38Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:38 crc 
kubenswrapper[4767]: I0127 15:50:38.348408 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:38Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.360672 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:38Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.373441 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:38Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.383560 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034f
d8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:38Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.395486 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r296r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03660290-055d-4f50-be45-3d6d9c023b34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r296r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:38Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.407613 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:38Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.417481 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.417516 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.417527 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.417541 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.417553 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:38Z","lastTransitionTime":"2026-01-27T15:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.431650 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 07:47:44.537314204 +0000 UTC Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.437329 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\
"cri-o://34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638305633f048890b914bbde6c3f9be7c6135bb0565024dcbc3d8fa1facc0d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638305633f048890b914bbde6c3f9be7c6135bb0565024dcbc3d8fa1facc0d80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:27Z\\\",\\\"message\\\":\\\"f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:50:27.081841 6411 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:50:27.081445 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081914 6411 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081923 6411 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0127 15:50:27.081929 6411 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0127 15:50:27.081935 6411 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 
15:50:27.081374 6411 obj_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x97k7_openshift-ovn-kubernetes(96ceb606-f7e2-4d60-a632-a9443e01b99a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:38Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.450925 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85599359-8ed9-48d0-a13e-3f2d3c2f4915\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa73f7eadac5c6ff80c55f80cd63c9a2aca033e9db04b351779738aeea07d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a30ef0d655a13360eb3001feb2d6d2e511d3063e2903f2fcff4714af7799c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4765fc3fd0fe4e4940f0e9b2421dbefe5545487182613514fd99b05a9b3cbb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f768bb7e98e5892e239288a3478996dcbfdaa66c0009223ee65b2c689a49a88\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f768bb7e98e5892e239288a3478996dcbfdaa66c0009223ee65b2c689a49a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:38Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.461945 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:38Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.474671 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:38Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.487165 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:38Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.498660 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:38Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.512652 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc63d7a0baa85de5b5a49c21dfa58afc0f78980ac5a4d5571a8acc17289df6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:38Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.520286 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.520335 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:38 crc 
kubenswrapper[4767]: I0127 15:50:38.520347 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.520363 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.520376 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:38Z","lastTransitionTime":"2026-01-27T15:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.525748 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"n
ame\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:38Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.536157 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfb98be5-2dff-40fa-9106-243d23891837\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db406da5a075948dbaf16512e9f6265715e1167ebab81fb47de650527de176a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a971061a55ba26b55e2d42a675d4390ca5ffb5b7165e11f136754a0de9b45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7hl64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:38Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.549785 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:38Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.627341 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.627379 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.627388 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.627402 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.627412 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:38Z","lastTransitionTime":"2026-01-27T15:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.729691 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.729728 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.729738 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.729753 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:38 crc kubenswrapper[4767]: I0127 15:50:38.729762 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:38Z","lastTransitionTime":"2026-01-27T15:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
[15:50:38.831-15:50:39.243: the five-entry cycle above, four "Recording event message for node" entries (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady) followed by the setters.go:603 "Node became not ready" (KubeletNotReady, no CNI configuration file) entry, repeats five more times, identical except for timestamps.]
Jan 27 15:50:39 crc kubenswrapper[4767]: I0127 15:50:39.325019 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r"
Jan 27 15:50:39 crc kubenswrapper[4767]: E0127 15:50:39.325153 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34"
Jan 27 15:50:39 crc kubenswrapper[4767]: I0127 15:50:39.325194 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 15:50:39 crc kubenswrapper[4767]: I0127 15:50:39.325185 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 15:50:39 crc kubenswrapper[4767]: E0127 15:50:39.325299 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 15:50:39 crc kubenswrapper[4767]: I0127 15:50:39.325320 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 15:50:39 crc kubenswrapper[4767]: E0127 15:50:39.325448 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 15:50:39 crc kubenswrapper[4767]: E0127 15:50:39.325779 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[15:50:39.346: event/not-ready cycle repeats once.]
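The repeated NetworkReady=false condition above traces to one fact the kubelet keeps re-checking: no CNI network config exists yet under /etc/kubernetes/cni/net.d/. A minimal sketch of that readiness check in Python, assuming only the directory path taken from the log; the polling loop and file-extension filter are illustrative, not the kubelet's actual implementation:

    import os
    import time

    # Directory named in the NetworkReady=false errors above.
    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d/"

    def cni_config_present(conf_dir: str = CNI_CONF_DIR) -> bool:
        # A network provider (OVN-Kubernetes/Multus here) is expected to drop
        # a .conf/.conflist/.json file into this directory once it is up.
        try:
            return any(name.endswith((".conf", ".conflist", ".json"))
                       for name in os.listdir(conf_dir))
        except FileNotFoundError:
            return False

    if __name__ == "__main__":
        while not cni_config_present():
            print("NetworkReady=false: no CNI configuration file yet")
            time.sleep(1)
        print("CNI configuration present; Ready condition can clear")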
Jan 27 15:50:39 crc kubenswrapper[4767]: I0127 15:50:39.432337 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 17:57:44.724839544 +0000 UTC
[15:50:39.449-15:50:40.374: event/not-ready cycle repeats ten more times, identical except for timestamps.]
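The certificate_manager line above reports a fixed expiration (2026-02-24 05:53:03 UTC) but a rotation deadline that differs on each occurrence (2025-12-28 here, other dates in later lines), consistent with a deadline drawn at random from the tail of the certificate's validity window on every evaluation. A sketch of that jittered-deadline idea; the 70-90% band and the one-year lifetime are assumptions for illustration, not constants taken from the kubelet source:

    import random
    from datetime import datetime, timedelta, timezone

    def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
        # Pick a random point late in the validity window; re-randomizing on
        # each evaluation matches the deadline moving between log lines.
        lifetime = (not_after - not_before).total_seconds()
        fraction = random.uniform(0.7, 0.9)  # illustrative band, an assumption
        return not_before + timedelta(seconds=lifetime * fraction)

    expiry = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)  # from the log
    issued = expiry - timedelta(days=365)  # assumed lifetime; not in the log
    print("rotation deadline:", rotation_deadline(issued, expiry))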
Jan 27 15:50:40 crc kubenswrapper[4767]: I0127 15:50:40.433711 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 13:38:04.40587297 +0000 UTC
[15:50:40.477-15:50:41.197: event/not-ready cycle repeats eight more times, identical except for timestamps.]
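The webhook failures earlier in this log ("x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:38Z is after 2025-08-24T17:21:41Z") are the standard X.509 validity-window check failing at the TLS layer. A sketch reproducing that check offline with the Python cryptography package, assuming a PEM copy of the serving certificate at a hypothetical path:

    from datetime import datetime, timezone

    from cryptography import x509  # third-party: pip install cryptography

    def check_validity(pem_path: str) -> None:
        with open(pem_path, "rb") as fh:
            cert = x509.load_pem_x509_certificate(fh.read())
        now = datetime.now(timezone.utc)
        if now > cert.not_valid_after_utc:  # requires cryptography >= 42
            # Same failure mode as the log: current time is after NotAfter.
            print(f"expired: current time {now:%Y-%m-%dT%H:%M:%SZ} "
                  f"is after {cert.not_valid_after_utc:%Y-%m-%dT%H:%M:%SZ}")
        elif now < cert.not_valid_before_utc:
            print("not yet valid")
        else:
            print("within validity window")

    check_validity("/tmp/network-node-identity-serving.pem")  # hypothetical path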
Has your network provider started?"}
[15:50:41.300: event/not-ready cycle repeats once.]
Jan 27 15:50:41 crc kubenswrapper[4767]: I0127 15:50:41.324526 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 15:50:41 crc kubenswrapper[4767]: I0127 15:50:41.324564 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 15:50:41 crc kubenswrapper[4767]: I0127 15:50:41.324615 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r"
Jan 27 15:50:41 crc kubenswrapper[4767]: I0127 15:50:41.324524 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 15:50:41 crc kubenswrapper[4767]: E0127 15:50:41.324658 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 15:50:41 crc kubenswrapper[4767]: E0127 15:50:41.324750 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34"
Jan 27 15:50:41 crc kubenswrapper[4767]: E0127 15:50:41.324867 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 15:50:41 crc kubenswrapper[4767]: E0127 15:50:41.324922 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[15:50:41.402: event/not-ready cycle repeats once.]
Jan 27 15:50:41 crc kubenswrapper[4767]: I0127 15:50:41.433835 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 02:24:15.480229197 +0000 UTC
[15:50:41.504: event/not-ready cycle repeats once.]
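The "failed to patch status" entries earlier in this log embed the attempted status patch as a doubly quoted JSON string inside err=..., so recovering the structured patch from a captured line takes two json.loads passes: one to undo the quoting, one to parse the patch itself. A sketch on an abbreviated sample; the uid is taken from the ovnkube-control-plane entry above and the rest of the patch is trimmed for brevity:

    import json

    # Abbreviated err=... payload; only the uid is verbatim from the log.
    raw = r'"{\"metadata\":{\"uid\":\"cfb98be5-2dff-40fa-9106-243d23891837\"},\"status\":{\"phase\":\"Running\"}}"'
    patch = json.loads(raw)  # first pass: undo the outer quoting -> patch text
    doc = json.loads(patch)  # second pass: parse the patch itself
    print(doc["metadata"]["uid"], doc["status"]["phase"])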
[15:50:41.607-15:50:42.120: event/not-ready cycle repeats six more times, identical except for timestamps.]
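The condition={...} payload logged by setters.go:603 is plain JSON, so the node's Ready state and the KubeletNotReady reason can be read back directly from a captured line. A small sketch, with the condition copied verbatim from one of the entries above:

    import json

    # Condition object from a setters.go:603 "Node became not ready" entry.
    line = ('{"type":"Ready","status":"False",'
            '"lastHeartbeatTime":"2026-01-27T15:50:42Z",'
            '"lastTransitionTime":"2026-01-27T15:50:42Z",'
            '"reason":"KubeletNotReady",'
            '"message":"container runtime network not ready: NetworkReady=false '
            'reason:NetworkPluginNotReady message:Network plugin returns error: '
            'no CNI configuration file in /etc/kubernetes/cni/net.d/. '
            'Has your network provider started?"}')
    cond = json.loads(line)
    ready = cond["type"] == "Ready" and cond["status"] == "True"
    print(f"ready={ready} reason={cond['reason']}")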
Has your network provider started?"}
[15:50:42.223: event/not-ready cycle repeats once.]
Jan 27 15:50:42 crc kubenswrapper[4767]: I0127 15:50:42.325985 4767 scope.go:117] "RemoveContainer" containerID="638305633f048890b914bbde6c3f9be7c6135bb0565024dcbc3d8fa1facc0d80"
Jan 27 15:50:42 crc kubenswrapper[4767]: E0127 15:50:42.326178 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x97k7_openshift-ovn-kubernetes(96ceb606-f7e2-4d60-a632-a9443e01b99a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a"
[15:50:42.326 and 15:50:42.428: event/not-ready cycle repeats twice.]
Jan 27 15:50:42 crc kubenswrapper[4767]: I0127 15:50:42.434922 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 04:30:26.866330505 +0000 UTC
[15:50:42.531: event/not-ready cycle repeats once.]
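The "back-off 20s restarting failed container" message above reflects an exponential restart delay for the crash-looping ovnkube-controller container: the delay doubles on each failed restart up to a cap. A sketch of that pattern; the 10-second base and 5-minute cap are assumptions for illustration, not constants taken from the kubelet:

    # Illustrative constants; the real base/cap are kubelet implementation details.
    INITIAL_BACKOFF_S = 10
    MAX_BACKOFF_S = 300

    def backoff_schedule(restarts: int) -> list[int]:
        delay, schedule = INITIAL_BACKOFF_S, []
        for _ in range(restarts):
            schedule.append(delay)
            delay = min(delay * 2, MAX_BACKOFF_S)
        return schedule

    # Second failed restart -> 20 s, matching the back-off in the log line.
    print(backoff_schedule(6))  # [10, 20, 40, 80, 160, 300]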
[15:50:42.636-15:50:43.148: event/not-ready cycle repeats six more times, identical except for timestamps.]
Has your network provider started?"} Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.251530 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.251585 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.251602 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.251620 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.251633 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:43Z","lastTransitionTime":"2026-01-27T15:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.325475 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.325531 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.325540 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.325576 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:50:43 crc kubenswrapper[4767]: E0127 15:50:43.325589 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:50:43 crc kubenswrapper[4767]: E0127 15:50:43.325754 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:50:43 crc kubenswrapper[4767]: E0127 15:50:43.325746 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:50:43 crc kubenswrapper[4767]: E0127 15:50:43.325831 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.354047 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.354081 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.354089 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.354103 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.354112 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:43Z","lastTransitionTime":"2026-01-27T15:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.435035 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 14:38:03.468132487 +0000 UTC Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.456872 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.456912 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.456921 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.456936 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.456947 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:43Z","lastTransitionTime":"2026-01-27T15:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.559831 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.559872 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.559886 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.559906 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.559920 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:43Z","lastTransitionTime":"2026-01-27T15:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.662498 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.662569 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.662582 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.662599 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.662631 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:43Z","lastTransitionTime":"2026-01-27T15:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.764522 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.764578 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.764595 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.764613 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.764625 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:43Z","lastTransitionTime":"2026-01-27T15:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.867239 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.867284 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.867299 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.867316 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.867328 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:43Z","lastTransitionTime":"2026-01-27T15:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.971071 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.971137 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.971149 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.971169 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:43 crc kubenswrapper[4767]: I0127 15:50:43.971180 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:43Z","lastTransitionTime":"2026-01-27T15:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.073535 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.073573 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.073584 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.073600 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.073611 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:44Z","lastTransitionTime":"2026-01-27T15:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.185651 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.185697 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.185708 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.185723 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.185737 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:44Z","lastTransitionTime":"2026-01-27T15:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.288984 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.289054 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.289068 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.289090 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.289120 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:44Z","lastTransitionTime":"2026-01-27T15:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.391769 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.391821 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.391834 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.391852 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.392188 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:44Z","lastTransitionTime":"2026-01-27T15:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.435904 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 15:02:39.064090536 +0000 UTC Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.494303 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.494334 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.494345 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.494357 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.494365 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:44Z","lastTransitionTime":"2026-01-27T15:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.597856 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.597913 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.597927 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.597944 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.597963 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:44Z","lastTransitionTime":"2026-01-27T15:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.700877 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.700947 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.700958 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.700979 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.700990 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:44Z","lastTransitionTime":"2026-01-27T15:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.803291 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.803340 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.803348 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.803365 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.803377 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:44Z","lastTransitionTime":"2026-01-27T15:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.906140 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.906225 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.906238 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.906259 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:44 crc kubenswrapper[4767]: I0127 15:50:44.906274 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:44Z","lastTransitionTime":"2026-01-27T15:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.008675 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.008737 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.008748 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.008767 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.008779 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:45Z","lastTransitionTime":"2026-01-27T15:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.111080 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.111134 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.111150 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.111169 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.111183 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:45Z","lastTransitionTime":"2026-01-27T15:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.213016 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.213060 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.213071 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.213087 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.213100 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:45Z","lastTransitionTime":"2026-01-27T15:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.318818 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.319126 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.319252 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.319348 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.319434 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:45Z","lastTransitionTime":"2026-01-27T15:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.324808 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.324885 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:45 crc kubenswrapper[4767]: E0127 15:50:45.324930 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.324986 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:50:45 crc kubenswrapper[4767]: E0127 15:50:45.325039 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.325178 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:50:45 crc kubenswrapper[4767]: E0127 15:50:45.325175 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:50:45 crc kubenswrapper[4767]: E0127 15:50:45.325493 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.422577 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.422643 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.422657 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.422684 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.422697 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:45Z","lastTransitionTime":"2026-01-27T15:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.436043 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 07:09:20.531155296 +0000 UTC Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.526102 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.526138 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.526149 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.526165 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.526177 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:45Z","lastTransitionTime":"2026-01-27T15:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.608985 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03660290-055d-4f50-be45-3d6d9c023b34-metrics-certs\") pod \"network-metrics-daemon-r296r\" (UID: \"03660290-055d-4f50-be45-3d6d9c023b34\") " pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:50:45 crc kubenswrapper[4767]: E0127 15:50:45.609135 4767 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:50:45 crc kubenswrapper[4767]: E0127 15:50:45.609720 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03660290-055d-4f50-be45-3d6d9c023b34-metrics-certs podName:03660290-055d-4f50-be45-3d6d9c023b34 nodeName:}" failed. No retries permitted until 2026-01-27 15:51:17.609703883 +0000 UTC m=+99.998721406 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03660290-055d-4f50-be45-3d6d9c023b34-metrics-certs") pod "network-metrics-daemon-r296r" (UID: "03660290-055d-4f50-be45-3d6d9c023b34") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.628129 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.628179 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.628190 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.628231 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.628244 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:45Z","lastTransitionTime":"2026-01-27T15:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.730754 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.731006 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.731070 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.731138 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.731247 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:45Z","lastTransitionTime":"2026-01-27T15:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.834326 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.834563 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.834625 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.834685 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.834764 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:45Z","lastTransitionTime":"2026-01-27T15:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.937273 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.937527 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.937638 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.937733 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:45 crc kubenswrapper[4767]: I0127 15:50:45.937817 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:45Z","lastTransitionTime":"2026-01-27T15:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.035196 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.035473 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.035537 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.035600 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.035656 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:46Z","lastTransitionTime":"2026-01-27T15:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:46 crc kubenswrapper[4767]: E0127 15:50:46.047675 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.051331 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.051463 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.051522 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.051610 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.051677 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:46Z","lastTransitionTime":"2026-01-27T15:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:46 crc kubenswrapper[4767]: E0127 15:50:46.067963 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.072116 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.072166 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.072179 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.072210 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.072222 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:46Z","lastTransitionTime":"2026-01-27T15:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:46 crc kubenswrapper[4767]: E0127 15:50:46.084535 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.089269 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.089320 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.089332 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.089352 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.089362 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:46Z","lastTransitionTime":"2026-01-27T15:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:46 crc kubenswrapper[4767]: E0127 15:50:46.103327 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.107908 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.108160 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.108301 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.108407 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.108507 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:46Z","lastTransitionTime":"2026-01-27T15:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:46 crc kubenswrapper[4767]: E0127 15:50:46.126498 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:46Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:46 crc kubenswrapper[4767]: E0127 15:50:46.127319 4767 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.129329 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.129437 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.129505 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.129582 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.129664 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:46Z","lastTransitionTime":"2026-01-27T15:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.232122 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.232164 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.232178 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.232229 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.232244 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:46Z","lastTransitionTime":"2026-01-27T15:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.334867 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.335145 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.335327 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.335450 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.335550 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:46Z","lastTransitionTime":"2026-01-27T15:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.436493 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 07:38:44.179976802 +0000 UTC Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.438058 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.438092 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.438104 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.438119 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.438132 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:46Z","lastTransitionTime":"2026-01-27T15:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.540577 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.540637 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.540656 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.540669 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.540684 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:46Z","lastTransitionTime":"2026-01-27T15:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.642822 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.642853 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.642863 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.642879 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.642890 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:46Z","lastTransitionTime":"2026-01-27T15:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.745492 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.745551 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.745561 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.745577 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.745587 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:46Z","lastTransitionTime":"2026-01-27T15:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.848250 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.848293 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.848304 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.848320 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.848331 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:46Z","lastTransitionTime":"2026-01-27T15:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.950424 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.950498 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.950517 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.950539 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:46 crc kubenswrapper[4767]: I0127 15:50:46.950550 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:46Z","lastTransitionTime":"2026-01-27T15:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.053431 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.053474 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.053484 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.053510 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.053520 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:47Z","lastTransitionTime":"2026-01-27T15:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.156142 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.156258 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.156268 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.156282 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.156294 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:47Z","lastTransitionTime":"2026-01-27T15:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.259186 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.259259 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.259310 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.259332 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.259344 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:47Z","lastTransitionTime":"2026-01-27T15:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.324689 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.324723 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.324798 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:47 crc kubenswrapper[4767]: E0127 15:50:47.324832 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.324699 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:50:47 crc kubenswrapper[4767]: E0127 15:50:47.324974 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:50:47 crc kubenswrapper[4767]: E0127 15:50:47.325016 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:50:47 crc kubenswrapper[4767]: E0127 15:50:47.325061 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.361993 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.362030 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.362041 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.362059 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.362071 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:47Z","lastTransitionTime":"2026-01-27T15:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.437646 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 03:32:26.317739691 +0000 UTC Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.464269 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.464314 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.464324 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.464339 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.464350 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:47Z","lastTransitionTime":"2026-01-27T15:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.566566 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.566594 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.566602 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.566614 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.566623 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:47Z","lastTransitionTime":"2026-01-27T15:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.668774 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.668810 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.668819 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.668833 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.668842 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:47Z","lastTransitionTime":"2026-01-27T15:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.772288 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.772322 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.772331 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.772343 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.772353 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:47Z","lastTransitionTime":"2026-01-27T15:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.874791 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.874853 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.874866 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.874887 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.874898 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:47Z","lastTransitionTime":"2026-01-27T15:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.977098 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.977131 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.977140 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.977152 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:47 crc kubenswrapper[4767]: I0127 15:50:47.977162 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:47Z","lastTransitionTime":"2026-01-27T15:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.079937 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.080003 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.080015 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.080031 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.080042 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:48Z","lastTransitionTime":"2026-01-27T15:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.182411 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.182464 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.182477 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.182525 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.182539 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:48Z","lastTransitionTime":"2026-01-27T15:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.284834 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.284878 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.284887 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.284906 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.284918 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:48Z","lastTransitionTime":"2026-01-27T15:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.342194 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85599359-8ed9-48d0-a13e-3f2d3c2f4915\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa73f7eadac5c6ff80c55f80cd63c9a2aca033e9db04b351779738aeea07d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a30ef0d655a13360eb3001feb2d6d2e511d3063e2903f2fcff4714af7799c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4765fc3fd0fe4e4940f0e9b2421dbefe5545487182613514fd99b05a9b3cbb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f768bb7e98e5892e239288a3478996dcbfdaa66c0009223ee65b2c689a49a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f768bb7e98e5892e239288a3478996dcbfdaa66c0009223ee65b2c689a49a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.359550 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a9
49e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.376276 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.388173 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.388241 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.388259 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.388276 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.388287 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:48Z","lastTransitionTime":"2026-01-27T15:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.389646 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.416748 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638305633f048890b914bbde6c3f9be7c6135bb0565024dcbc3d8fa1facc0d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638305633f048890b914bbde6c3f9be7c6135bb0565024dcbc3d8fa1facc0d80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:27Z\\\",\\\"message\\\":\\\"f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:50:27.081841 6411 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:50:27.081445 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081914 6411 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081923 6411 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0127 15:50:27.081929 6411 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0127 15:50:27.081935 6411 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081374 6411 obj_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x97k7_openshift-ovn-kubernetes(96ceb606-f7e2-4d60-a632-a9443e01b99a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.431007 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.438034 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 19:21:42.511772202 +0000 UTC Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.443156 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.454813 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.469325 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc63d7a0baa85de5b5a49c21dfa58afc0f78980ac5a4d5571a8acc17289df6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.482473 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/et
c/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.491649 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.491684 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.491693 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.491709 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.491721 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:48Z","lastTransitionTime":"2026-01-27T15:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.493722 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfb98be5-2dff-40fa-9106-243d23891837\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db406da5a075948dbaf16512e9f6265715e1167ebab81fb47de650527de176a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a971061a55ba26b55e2d42a675d4390ca5ffb5b7165e11f136754a0de9b45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7hl64\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.506429 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.518614 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.530357 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.544176 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.556107 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034f
d8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.566835 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r296r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03660290-055d-4f50-be45-3d6d9c023b34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r296r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.595020 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.595097 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.595161 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.595187 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.595218 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:48Z","lastTransitionTime":"2026-01-27T15:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.697289 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.697330 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.697347 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.697364 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.697373 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:48Z","lastTransitionTime":"2026-01-27T15:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.776813 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zfxc7_cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78/kube-multus/0.log" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.776858 4767 generic.go:334] "Generic (PLEG): container finished" podID="cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78" containerID="3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d" exitCode=1 Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.776908 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zfxc7" event={"ID":"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78","Type":"ContainerDied","Data":"3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d"} Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.777582 4767 scope.go:117] "RemoveContainer" containerID="3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.791946 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.803834 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.803874 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.803884 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.803899 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.803911 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:48Z","lastTransitionTime":"2026-01-27T15:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.808618 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034fd8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.820425 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r296r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03660290-055d-4f50-be45-3d6d9c023b34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r296r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.838418 4767 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638305633f048890b914bbde6c3f9be7c6135bb0565024dcbc3d8fa1facc0d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638305633f048890b914bbde6c3f9be7c6135bb0565024dcbc3d8fa1facc0d80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:27Z\\\",\\\"message\\\":\\\"f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:50:27.081841 6411 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:50:27.081445 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081914 6411 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081923 6411 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0127 15:50:27.081929 6411 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0127 15:50:27.081935 6411 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081374 6411 obj_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x97k7_openshift-ovn-kubernetes(96ceb606-f7e2-4d60-a632-a9443e01b99a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.848433 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85599359-8ed9-48d0-a13e-3f2d3c2f4915\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa73f7eadac5c6ff80c55f80cd63c9a2aca033e9db04b351779738aeea07d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a30ef0d655a13360eb3001feb2d6d2e511d3063e2903f2fcff4714af7799c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4765fc3fd0fe4e4940f0e9b2421dbefe5545487182613514fd99b05a9b3cbb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f768bb7e98e5892e239288a3478996dcbfdaa66c0009223ee65b2c689a49a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f768bb7e98e5892e239288a3478996dcbfdaa66c0009223ee65b2c689a49a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.860590 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.873848 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.886377 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.898049 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.906026 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.906061 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.906069 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.906082 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.906091 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:48Z","lastTransitionTime":"2026-01-27T15:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.909640 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc63d7a0baa85de5b5a49c21dfa58afc0f78980ac5a4d5571a8acc17289df6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.921092 4767 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-zfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:48Z\\\",\\\"message\\\":\\\"2026-01-27T15:50:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8b471834-528d-4042-a73c-e1b76d19dd8a\\\\n2026-01-27T15:50:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8b471834-528d-4042-a73c-e1b76d19dd8a to /host/opt/cni/bin/\\\\n2026-01-27T15:50:03Z [verbose] multus-daemon started\\\\n2026-01-27T15:50:03Z [verbose] Readiness Indicator file check\\\\n2026-01-27T15:50:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.933301 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfb98be5-2dff-40fa-9106-243d23891837\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db406da5a075948dbaf16512e9f6265715e1167ebab81fb47de650527de176a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a971061a55ba26b55e2d42a675d4390ca5ffb5b7165e11f136754a0de9b45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7hl64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 
15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.944686 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.957221 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.968797 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.979075 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:48 crc kubenswrapper[4767]: I0127 15:50:48.990972 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:48Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.008284 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.008340 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.008356 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.008375 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.008389 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:49Z","lastTransitionTime":"2026-01-27T15:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.111101 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.111163 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.111178 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.111194 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.111225 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:49Z","lastTransitionTime":"2026-01-27T15:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.214109 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.214163 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.214171 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.214185 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.214195 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:49Z","lastTransitionTime":"2026-01-27T15:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.316724 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.316770 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.316782 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.316796 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.316807 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:49Z","lastTransitionTime":"2026-01-27T15:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.325024 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.325071 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.325051 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.325049 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:50:49 crc kubenswrapper[4767]: E0127 15:50:49.325158 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:50:49 crc kubenswrapper[4767]: E0127 15:50:49.325318 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:50:49 crc kubenswrapper[4767]: E0127 15:50:49.325375 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:50:49 crc kubenswrapper[4767]: E0127 15:50:49.325446 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.418962 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.419001 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.419011 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.419027 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.419040 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:49Z","lastTransitionTime":"2026-01-27T15:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.438138 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 04:34:12.005890361 +0000 UTC Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.521323 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.521360 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.521369 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.521384 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.521393 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:49Z","lastTransitionTime":"2026-01-27T15:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.624377 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.624430 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.624442 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.624459 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.624472 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:49Z","lastTransitionTime":"2026-01-27T15:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.727147 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.727178 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.727186 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.727217 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.727228 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:49Z","lastTransitionTime":"2026-01-27T15:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.782080 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zfxc7_cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78/kube-multus/0.log" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.782145 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zfxc7" event={"ID":"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78","Type":"ContainerStarted","Data":"3817cfa4a4454eef8d58130f57a16e1665d28b56c080f84edcc8f341a5e5267f"} Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.803125 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638305633f048890b914bbde6c3f9be7c6135bb0
565024dcbc3d8fa1facc0d80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638305633f048890b914bbde6c3f9be7c6135bb0565024dcbc3d8fa1facc0d80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:27Z\\\",\\\"message\\\":\\\"f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:50:27.081841 6411 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:50:27.081445 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081914 6411 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081923 6411 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0127 15:50:27.081929 6411 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0127 15:50:27.081935 6411 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081374 6411 obj_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x97k7_openshift-ovn-kubernetes(96ceb606-f7e2-4d60-a632-a9443e01b99a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.820011 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85599359-8ed9-48d0-a13e-3f2d3c2f4915\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa73f7eadac5c6ff80c55f80cd63c9a2aca033e9db04b351779738aeea07d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a30ef0d655a13360eb3001feb2d6d2e511d3063e2903f2fcff4714af7799c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4765fc3fd0fe4e4940f0e9b2421dbefe5545487182613514fd99b05a9b3cbb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f768bb7e98e5892e239288a3478996dcbfdaa66c0009223ee65b2c689a49a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f768bb7e98e5892e239288a3478996dcbfdaa66c0009223ee65b2c689a49a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.829936 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.829985 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.829998 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.830015 4767 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.830028 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:49Z","lastTransitionTime":"2026-01-27T15:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.833838 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e635
5e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.846260 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.857837 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.870970 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.886990 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc63d7a0baa85de5b5a49c21dfa58afc0f78980ac5a4d5571a8acc17289df6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/c
ni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.903775 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3817cfa4a4454eef8d58130f57a16e1665d28b56c080f84edcc8f341a5e5267f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:48Z\\\",\\\"message\\\":\\\"2026-01-27T15:50:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8b471834-528d-4042-a73c-e1b76d19dd8a\\\\n2026-01-27T15:50:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8b471834-528d-4042-a73c-e1b76d19dd8a to /host/opt/cni/bin/\\\\n2026-01-27T15:50:03Z [verbose] multus-daemon started\\\\n2026-01-27T15:50:03Z [verbose] Readiness Indicator file check\\\\n2026-01-27T15:50:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.916953 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfb98be5-2dff-40fa-9106-243d23891837\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db406da5a075948dbaf16512e9f6265715e1167ebab81fb47de650527de176a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a971061a55ba26b55e2d42a675d4390ca5ffb5b7165e11f136754a0de9b45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7hl64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:49Z is after 2025-08-24T17:21:41Z" Jan 27 
15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.931563 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.933174 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.933245 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.933264 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.933286 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.933321 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:49Z","lastTransitionTime":"2026-01-27T15:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.944160 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.957263 4767 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.970431 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.982170 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:49 crc kubenswrapper[4767]: I0127 15:50:49.994743 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:49Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.006412 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034f
d8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.016390 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r296r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03660290-055d-4f50-be45-3d6d9c023b34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r296r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:50Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.036442 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.036503 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.036517 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.036537 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.036551 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:50Z","lastTransitionTime":"2026-01-27T15:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.139090 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.139131 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.139140 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.139155 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.139165 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:50Z","lastTransitionTime":"2026-01-27T15:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.241455 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.241493 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.241501 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.241518 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.241527 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:50Z","lastTransitionTime":"2026-01-27T15:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.344109 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.344165 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.344177 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.344216 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.344231 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:50Z","lastTransitionTime":"2026-01-27T15:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.439120 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 01:05:32.866000851 +0000 UTC Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.447849 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.447902 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.447914 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.447931 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.447944 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:50Z","lastTransitionTime":"2026-01-27T15:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.550236 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.550299 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.550312 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.550330 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:50 crc kubenswrapper[4767]: I0127 15:50:50.550340 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:50Z","lastTransitionTime":"2026-01-27T15:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
...
Jan 27 15:50:51 crc kubenswrapper[4767]: I0127 15:50:51.271029 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:51 crc kubenswrapper[4767]: I0127 15:50:51.271078 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:51 crc kubenswrapper[4767]: I0127 15:50:51.271090 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:51 crc kubenswrapper[4767]: I0127 15:50:51.271109 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:51 crc kubenswrapper[4767]: I0127 15:50:51.271120 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:51Z","lastTransitionTime":"2026-01-27T15:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:50:51 crc kubenswrapper[4767]: I0127 15:50:51.324586 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 15:50:51 crc kubenswrapper[4767]: I0127 15:50:51.324619 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 15:50:51 crc kubenswrapper[4767]: I0127 15:50:51.324638 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 15:50:51 crc kubenswrapper[4767]: I0127 15:50:51.324603 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r"
Jan 27 15:50:51 crc kubenswrapper[4767]: E0127 15:50:51.324804 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 15:50:51 crc kubenswrapper[4767]: E0127 15:50:51.324734 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 15:50:51 crc kubenswrapper[4767]: E0127 15:50:51.324935 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34"
Jan 27 15:50:51 crc kubenswrapper[4767]: E0127 15:50:51.324982 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 15:50:51 crc kubenswrapper[4767]: I0127 15:50:51.373791 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:51 crc kubenswrapper[4767]: I0127 15:50:51.373852 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:51 crc kubenswrapper[4767]: I0127 15:50:51.373862 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:51 crc kubenswrapper[4767]: I0127 15:50:51.373878 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:51 crc kubenswrapper[4767]: I0127 15:50:51.373888 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:51Z","lastTransitionTime":"2026-01-27T15:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:50:51 crc kubenswrapper[4767]: I0127 15:50:51.439621 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 23:36:31.459946714 +0000 UTC
Jan 27 15:50:51 crc kubenswrapper[4767]: I0127 15:50:51.475978 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:51 crc kubenswrapper[4767]: I0127 15:50:51.476018 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:51 crc kubenswrapper[4767]: I0127 15:50:51.476030 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:51 crc kubenswrapper[4767]: I0127 15:50:51.476045 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:51 crc kubenswrapper[4767]: I0127 15:50:51.476058 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:51Z","lastTransitionTime":"2026-01-27T15:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
...
Jan 27 15:50:52 crc kubenswrapper[4767]: I0127 15:50:52.440152 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 19:49:40.675358844 +0000 UTC
Jan 27 15:50:52 crc kubenswrapper[4767]: I0127 15:50:52.505627 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:52 crc kubenswrapper[4767]: I0127 15:50:52.505669 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:52 crc kubenswrapper[4767]: I0127 15:50:52.505707 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:52 crc kubenswrapper[4767]: I0127 15:50:52.505729 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:52 crc kubenswrapper[4767]: I0127 15:50:52.505742 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:52Z","lastTransitionTime":"2026-01-27T15:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:50:52 crc kubenswrapper[4767]: I0127 15:50:52.608925 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:52 crc kubenswrapper[4767]: I0127 15:50:52.608996 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:52 crc kubenswrapper[4767]: I0127 15:50:52.609013 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:52 crc kubenswrapper[4767]: I0127 15:50:52.609038 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:52 crc kubenswrapper[4767]: I0127 15:50:52.609056 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:52Z","lastTransitionTime":"2026-01-27T15:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
...
Jan 27 15:50:53 crc kubenswrapper[4767]: I0127 15:50:53.325454 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 15:50:53 crc kubenswrapper[4767]: I0127 15:50:53.325508 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 15:50:53 crc kubenswrapper[4767]: I0127 15:50:53.325463 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 15:50:53 crc kubenswrapper[4767]: I0127 15:50:53.325554 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r"
Jan 27 15:50:53 crc kubenswrapper[4767]: E0127 15:50:53.325710 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 15:50:53 crc kubenswrapper[4767]: E0127 15:50:53.325793 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 15:50:53 crc kubenswrapper[4767]: E0127 15:50:53.325869 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34"
Jan 27 15:50:53 crc kubenswrapper[4767]: E0127 15:50:53.325942 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 15:50:53 crc kubenswrapper[4767]: I0127 15:50:53.332891 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:53 crc kubenswrapper[4767]: I0127 15:50:53.332961 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:53 crc kubenswrapper[4767]: I0127 15:50:53.332985 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:53 crc kubenswrapper[4767]: I0127 15:50:53.333014 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:53 crc kubenswrapper[4767]: I0127 15:50:53.333046 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:53Z","lastTransitionTime":"2026-01-27T15:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:50:53 crc kubenswrapper[4767]: I0127 15:50:53.435861 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:53 crc kubenswrapper[4767]: I0127 15:50:53.435900 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:53 crc kubenswrapper[4767]: I0127 15:50:53.435909 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:53 crc kubenswrapper[4767]: I0127 15:50:53.435922 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:53 crc kubenswrapper[4767]: I0127 15:50:53.435932 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:53Z","lastTransitionTime":"2026-01-27T15:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:50:53 crc kubenswrapper[4767]: I0127 15:50:53.441056 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 20:09:45.704719628 +0000 UTC
Jan 27 15:50:53 crc kubenswrapper[4767]: I0127 15:50:53.538030 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:53 crc kubenswrapper[4767]: I0127 15:50:53.538067 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:53 crc kubenswrapper[4767]: I0127 15:50:53.538075 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:53 crc kubenswrapper[4767]: I0127 15:50:53.538089 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:53 crc kubenswrapper[4767]: I0127 15:50:53.538098 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:53Z","lastTransitionTime":"2026-01-27T15:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:50:53 crc kubenswrapper[4767]: I0127 15:50:53.640197 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:53 crc kubenswrapper[4767]: I0127 15:50:53.640252 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:53 crc kubenswrapper[4767]: I0127 15:50:53.640262 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:53 crc kubenswrapper[4767]: I0127 15:50:53.640276 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:53 crc kubenswrapper[4767]: I0127 15:50:53.640286 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:53Z","lastTransitionTime":"2026-01-27T15:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
...
Jan 27 15:50:54 crc kubenswrapper[4767]: I0127 15:50:54.361483 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:54 crc kubenswrapper[4767]: I0127 15:50:54.361533 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:54 crc kubenswrapper[4767]: I0127 15:50:54.361543 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:54 crc kubenswrapper[4767]: I0127 15:50:54.361559 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:54 crc kubenswrapper[4767]: I0127 15:50:54.361571 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:54Z","lastTransitionTime":"2026-01-27T15:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:50:54 crc kubenswrapper[4767]: I0127 15:50:54.441511 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 22:27:20.516450988 +0000 UTC
Jan 27 15:50:54 crc kubenswrapper[4767]: I0127 15:50:54.463444 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:54 crc kubenswrapper[4767]: I0127 15:50:54.463485 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:54 crc kubenswrapper[4767]: I0127 15:50:54.463494 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:54 crc kubenswrapper[4767]: I0127 15:50:54.463508 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:54 crc kubenswrapper[4767]: I0127 15:50:54.463518 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:54Z","lastTransitionTime":"2026-01-27T15:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
...
Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.182076 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.182178 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.182190 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.182230 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.182241 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:55Z","lastTransitionTime":"2026-01-27T15:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.285500 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.285541 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.285552 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.285565 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.285573 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:55Z","lastTransitionTime":"2026-01-27T15:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.325268 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.325293 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.325359 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r"
Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.325395 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 15:50:55 crc kubenswrapper[4767]: E0127 15:50:55.325479 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.325617 4767 scope.go:117] "RemoveContainer" containerID="638305633f048890b914bbde6c3f9be7c6135bb0565024dcbc3d8fa1facc0d80"
Jan 27 15:50:55 crc kubenswrapper[4767]: E0127 15:50:55.325614 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 15:50:55 crc kubenswrapper[4767]: E0127 15:50:55.325692 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 15:50:55 crc kubenswrapper[4767]: E0127 15:50:55.325811 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34"
Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.388760 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.389071 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.389083 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.389100 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.389116 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:55Z","lastTransitionTime":"2026-01-27T15:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.442075 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 19:48:18.386971522 +0000 UTC Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.493639 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.493667 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.493677 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.493691 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.493701 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:55Z","lastTransitionTime":"2026-01-27T15:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.597031 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.597062 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.597070 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.597085 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.597120 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:55Z","lastTransitionTime":"2026-01-27T15:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.699644 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.699693 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.699704 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.699719 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.699728 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:55Z","lastTransitionTime":"2026-01-27T15:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.801409 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.801472 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.801489 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.801512 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.801531 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:55Z","lastTransitionTime":"2026-01-27T15:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.804700 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x97k7_96ceb606-f7e2-4d60-a632-a9443e01b99a/ovnkube-controller/2.log" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.807489 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" event={"ID":"96ceb606-f7e2-4d60-a632-a9443e01b99a","Type":"ContainerStarted","Data":"a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb"} Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.807914 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.823404 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f3647163
5dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:55Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.837848 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034fd8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:55Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.848875 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-r296r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03660290-055d-4f50-be45-3d6d9c023b34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r296r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:55Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.866609 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638305633f048890b914bbde6c3f9be7c6135bb0565024dcbc3d8fa1facc0d80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:27Z\\\",\\\"message\\\":\\\"f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:50:27.081841 6411 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:50:27.081445 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081914 6411 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081923 6411 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0127 15:50:27.081929 6411 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0127 15:50:27.081935 6411 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081374 6411 
obj_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:55Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.880753 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85599359-8ed9-48d0-a13e-3f2d3c2f4915\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa73f7eadac5c6ff80c55f80cd63c9a2aca033e9db04b351779738aeea07d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a30ef0d655a13360eb3001feb2d6d2e511d3063e2903f2fcff4714af7799c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4765fc3fd0fe4e4940f0e9b2421dbefe5545487182613514fd99b05a9b3cbb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f768bb7e98e5892e239288a3478996dcbfdaa66c0009223ee65b2c689a49a88\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f768bb7e98e5892e239288a3478996dcbfdaa66c0009223ee65b2c689a49a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:55Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.892828 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:55Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.903686 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:55Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.905348 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.905428 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.905442 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.905459 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.905470 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:55Z","lastTransitionTime":"2026-01-27T15:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.917064 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:55Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.943121 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:55Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.962238 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc63d7a0baa85de5b5a49c21dfa58afc0f78980ac5a4d5571a8acc17289df6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:55Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.982495 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3817cfa4a4454eef8d58130f57a16e1665d28b56c080f84edcc8f341a5e5267f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:48Z\\\",\\\"message\\\":\\\"2026-01-27T15:50:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8b471834-528d-4042-a73c-e1b76d19dd8a\\\\n2026-01-27T15:50:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8b471834-528d-4042-a73c-e1b76d19dd8a to /host/opt/cni/bin/\\\\n2026-01-27T15:50:03Z [verbose] multus-daemon started\\\\n2026-01-27T15:50:03Z [verbose] Readiness Indicator file check\\\\n2026-01-27T15:50:48Z [error] have you checked that your default 
network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:55Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:55 crc kubenswrapper[4767]: I0127 15:50:55.991843 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfb98be5-2dff-40fa-9106-243d23891837\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db406da5a075948dbaf16512e9f6265715e1167ebab81fb47de650527de176a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a971061a55ba26b55e2d42a675d4390ca5ffb5b7165e11f136754a0de9b45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7hl64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:55Z is after 2025-08-24T17:21:41Z" Jan 27 
15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.001842 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.008139 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.008167 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.008175 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.008191 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.008219 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:56Z","lastTransitionTime":"2026-01-27T15:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.013003 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.025684 4767 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.038290 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.052721 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.110226 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.110274 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.110283 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.110301 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.110312 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:56Z","lastTransitionTime":"2026-01-27T15:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.212716 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.212765 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.212776 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.212794 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.212807 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:56Z","lastTransitionTime":"2026-01-27T15:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.315026 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.315062 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.315072 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.315088 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.315102 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:56Z","lastTransitionTime":"2026-01-27T15:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.417596 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.417702 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.417748 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.417776 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.417792 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:56Z","lastTransitionTime":"2026-01-27T15:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.434596 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.434672 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.434696 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.434725 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.434747 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:56Z","lastTransitionTime":"2026-01-27T15:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.443013 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 15:00:59.445644161 +0000 UTC Jan 27 15:50:56 crc kubenswrapper[4767]: E0127 15:50:56.457075 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.463033 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.463124 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.463147 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.463178 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.463231 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:56Z","lastTransitionTime":"2026-01-27T15:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:56 crc kubenswrapper[4767]: E0127 15:50:56.484029 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.489983 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.490056 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.490067 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.490089 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.490102 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:56Z","lastTransitionTime":"2026-01-27T15:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:56 crc kubenswrapper[4767]: E0127 15:50:56.505986 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.510225 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.510284 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.510302 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.510325 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.510342 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:56Z","lastTransitionTime":"2026-01-27T15:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:56 crc kubenswrapper[4767]: E0127 15:50:56.524540 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.527967 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.528005 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.528013 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.528030 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.528040 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:56Z","lastTransitionTime":"2026-01-27T15:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:56 crc kubenswrapper[4767]: E0127 15:50:56.541503 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:56 crc kubenswrapper[4767]: E0127 15:50:56.541648 4767 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.543298 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.543329 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.543339 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.543354 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.543364 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:56Z","lastTransitionTime":"2026-01-27T15:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.646073 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.646110 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.646121 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.646138 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.646148 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:56Z","lastTransitionTime":"2026-01-27T15:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.747939 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.747984 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.747995 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.748010 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.748022 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:56Z","lastTransitionTime":"2026-01-27T15:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.813111 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x97k7_96ceb606-f7e2-4d60-a632-a9443e01b99a/ovnkube-controller/3.log" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.814278 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x97k7_96ceb606-f7e2-4d60-a632-a9443e01b99a/ovnkube-controller/2.log" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.817006 4767 generic.go:334] "Generic (PLEG): container finished" podID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerID="a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb" exitCode=1 Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.817045 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" event={"ID":"96ceb606-f7e2-4d60-a632-a9443e01b99a","Type":"ContainerDied","Data":"a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb"} Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.817080 4767 scope.go:117] "RemoveContainer" containerID="638305633f048890b914bbde6c3f9be7c6135bb0565024dcbc3d8fa1facc0d80" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.818847 4767 scope.go:117] "RemoveContainer" containerID="a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb" Jan 27 15:50:56 crc kubenswrapper[4767]: E0127 15:50:56.819752 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x97k7_openshift-ovn-kubernetes(96ceb606-f7e2-4d60-a632-a9443e01b99a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.831353 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.841525 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.850446 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.850487 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.850497 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.850512 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.850523 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:56Z","lastTransitionTime":"2026-01-27T15:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.853299 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.867576 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.877889 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034f
d8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.888223 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r296r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03660290-055d-4f50-be45-3d6d9c023b34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r296r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.899757 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.915611 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a738457c903241fb5a776c2ca052da9636d6dffd
a4d00b9dde48ae249818f0eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638305633f048890b914bbde6c3f9be7c6135bb0565024dcbc3d8fa1facc0d80\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:27Z\\\",\\\"message\\\":\\\"f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:50:27.081841 6411 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 15:50:27.081445 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081914 6411 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081923 6411 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0127 15:50:27.081929 6411 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0127 15:50:27.081935 6411 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:27.081374 6411 obj_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"message\\\":\\\"go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0127 15:50:56.475410 6818 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64\\\\nI0127 15:50:56.475474 6818 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-r296r before timer (time: 2026-01-27 15:50:57.573597709 +0000 UTC m=+1.727339485): skip\\\\nI0127 15:50:56.475445 6818 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:56.475507 6818 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0127 15:50:56.475519 6818 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0127 15:50:56.475522 6818 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0127 15:50:56.475538 6818 base_network_controller_pods.go:477] 
[default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0127 15:50:56.475489 6818 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-man\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]
}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.925343 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85599359-8ed9-48d0-a13e-3f2d3c2f4915\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa73f7eadac5c6ff80c55f80cd63c9a2aca033e9db04b351779738aeea07d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a30ef0d655a13360eb3001feb2d6d2e511d3063e2903f2fcff4714af7799c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4765fc3fd0fe4e4940f0e9b2421dbefe5545487182613514fd99b05a9b3cbb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f768bb7e98e5892e239288a3478996dcbfdaa66c0009223ee65b2c689a49a88\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f768bb7e98e5892e239288a3478996dcbfdaa66c0009223ee65b2c689a49a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.937039 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.948148 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.952647 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.952688 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.952702 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.952719 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.952732 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:56Z","lastTransitionTime":"2026-01-27T15:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.964814 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.981903 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:56 crc kubenswrapper[4767]: I0127 15:50:56.998258 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc63d7a0baa85de5b5a49c21dfa58afc0f78980ac5a4d5571a8acc17289df6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:56Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.013393 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3817cfa4a4454eef8d58130f57a16e1665d28b56c080f84edcc8f341a5e5267f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:48Z\\\",\\\"message\\\":\\\"2026-01-27T15:50:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8b471834-528d-4042-a73c-e1b76d19dd8a\\\\n2026-01-27T15:50:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8b471834-528d-4042-a73c-e1b76d19dd8a to /host/opt/cni/bin/\\\\n2026-01-27T15:50:03Z [verbose] multus-daemon started\\\\n2026-01-27T15:50:03Z [verbose] Readiness Indicator file check\\\\n2026-01-27T15:50:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:57Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.026269 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfb98be5-2dff-40fa-9106-243d23891837\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db406da5a075948dbaf16512e9f6265715e1167ebab81fb47de650527de176a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a971061a55ba26b55e2d42a675d4390ca5ffb5b7165e11f136754a0de9b45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7hl64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:57Z is after 2025-08-24T17:21:41Z" Jan 27 
15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.040604 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:57Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.055740 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.055772 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.055782 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.055795 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.055805 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:57Z","lastTransitionTime":"2026-01-27T15:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.158717 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.159004 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.159115 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.159236 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.159330 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:57Z","lastTransitionTime":"2026-01-27T15:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.261659 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.261925 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.262005 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.262070 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.262131 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:57Z","lastTransitionTime":"2026-01-27T15:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.325391 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.325947 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:50:57 crc kubenswrapper[4767]: E0127 15:50:57.326048 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.326084 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.326094 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:57 crc kubenswrapper[4767]: E0127 15:50:57.326727 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:50:57 crc kubenswrapper[4767]: E0127 15:50:57.326902 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:50:57 crc kubenswrapper[4767]: E0127 15:50:57.327260 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.365955 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.366292 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.366418 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.366523 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.366615 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:57Z","lastTransitionTime":"2026-01-27T15:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.443535 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 02:00:22.013901726 +0000 UTC Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.469701 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.469737 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.469748 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.469763 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.469774 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:57Z","lastTransitionTime":"2026-01-27T15:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.571824 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.571866 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.571878 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.571892 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.571903 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:57Z","lastTransitionTime":"2026-01-27T15:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.674050 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.674092 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.674104 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.674122 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.674133 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:57Z","lastTransitionTime":"2026-01-27T15:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.776501 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.776558 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.776570 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.776588 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.776602 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:57Z","lastTransitionTime":"2026-01-27T15:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.827455 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x97k7_96ceb606-f7e2-4d60-a632-a9443e01b99a/ovnkube-controller/3.log" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.830612 4767 scope.go:117] "RemoveContainer" containerID="a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb" Jan 27 15:50:57 crc kubenswrapper[4767]: E0127 15:50:57.830791 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x97k7_openshift-ovn-kubernetes(96ceb606-f7e2-4d60-a632-a9443e01b99a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.845933 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:57Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.857258 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:57Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.870124 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:57Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.879173 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.879867 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.879880 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.879894 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.879904 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:57Z","lastTransitionTime":"2026-01-27T15:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.886539 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc63d7a0baa85de5b5a49c21dfa58afc0f78980ac5a4d5571a8acc17289df6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:57Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.898961 4767 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-zfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3817cfa4a4454eef8d58130f57a16e1665d28b56c080f84edcc8f341a5e5267f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:48Z\\\",\\\"message\\\":\\\"2026-01-27T15:50:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8b471834-528d-4042-a73c-e1b76d19dd8a\\\\n2026-01-27T15:50:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8b471834-528d-4042-a73c-e1b76d19dd8a to /host/opt/cni/bin/\\\\n2026-01-27T15:50:03Z [verbose] multus-daemon started\\\\n2026-01-27T15:50:03Z [verbose] Readiness Indicator file check\\\\n2026-01-27T15:50:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:57Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.909141 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfb98be5-2dff-40fa-9106-243d23891837\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db406da5a075948dbaf16512e9f6265715e1167ebab81fb47de650527de176a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a971061a55ba26b55e2d42a675d4390ca5ffb5b7165e11f136754a0de9b45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7hl64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:57Z is after 2025-08-24T17:21:41Z" Jan 27 
15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.918877 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:57Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.931920 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:57Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.943975 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:57Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.955598 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r296r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03660290-055d-4f50-be45-3d6d9c023b34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r296r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:57Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.970062 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:57Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.982528 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.982564 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.982580 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.982596 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.982608 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:57Z","lastTransitionTime":"2026-01-27T15:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.983930 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034fd8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:57Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:57 crc kubenswrapper[4767]: I0127 15:50:57.998861 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:57Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.012552 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.031189 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a738457c903241fb5a776c2ca052da9636d6dffd
a4d00b9dde48ae249818f0eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"message\\\":\\\"go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0127 15:50:56.475410 6818 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64\\\\nI0127 15:50:56.475474 6818 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-r296r before timer (time: 2026-01-27 15:50:57.573597709 +0000 UTC m=+1.727339485): skip\\\\nI0127 15:50:56.475445 6818 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:56.475507 6818 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0127 15:50:56.475519 6818 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0127 15:50:56.475522 6818 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0127 15:50:56.475538 6818 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0127 15:50:56.475489 6818 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-man\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x97k7_openshift-ovn-kubernetes(96ceb606-f7e2-4d60-a632-a9443e01b99a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.043807 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85599359-8ed9-48d0-a13e-3f2d3c2f4915\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa73f7eadac5c6ff80c55f80cd63c9a2aca033e9db04b351779738aeea07d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a30ef0d655a13360eb3001feb2d6d2e511d3063e2903f2fcff4714af7799c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4765fc3fd0fe4e4940f0e9b2421dbefe5545487182613514fd99b05a9b3cbb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f768bb7e98e5892e239288a3478996dcbfdaa66c0009223ee65b2c689a49a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f768bb7e98e5892e239288a3478996dcbfdaa66c0009223ee65b2c689a49a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.056031 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.084652 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.084708 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.084717 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.084731 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.084742 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:58Z","lastTransitionTime":"2026-01-27T15:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.187660 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.187713 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.187726 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.187744 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.187757 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:58Z","lastTransitionTime":"2026-01-27T15:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.290150 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.290222 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.290236 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.290258 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.290270 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:58Z","lastTransitionTime":"2026-01-27T15:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.337693 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85599359-8ed9-48d0-a13e-3f2d3c2f4915\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa73f7eadac5c6ff80c55f80cd63c9a2aca033e9db04b351779738aeea07d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a30ef0d655a13360eb3001feb2d6d2e511d3063e2903f2fcff4714af7799c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-sched
uler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4765fc3fd0fe4e4940f0e9b2421dbefe5545487182613514fd99b05a9b3cbb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f768bb7e98e5892e239288a3478996dcbfdaa66c0009223ee65b2c689a49a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f768bb7e98e5892e239288a3478996dcbfdaa66c0009223ee65b2c689a49a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.355259 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.378885 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.392959 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.393009 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.393020 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.393039 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.393053 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:58Z","lastTransitionTime":"2026-01-27T15:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.394135 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.413698 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"message\\\":\\\"go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0127 15:50:56.475410 6818 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64\\\\nI0127 15:50:56.475474 6818 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-r296r before timer (time: 2026-01-27 15:50:57.573597709 +0000 UTC m=+1.727339485): skip\\\\nI0127 15:50:56.475445 6818 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:56.475507 6818 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0127 15:50:56.475519 6818 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0127 15:50:56.475522 6818 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0127 15:50:56.475538 6818 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0127 15:50:56.475489 6818 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-man\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x97k7_openshift-ovn-kubernetes(96ceb606-f7e2-4d60-a632-a9443e01b99a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.424709 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfb98be5-2dff-40fa-9106-243d23891837\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db406da5a075948dbaf16512e9f6265715e1167ebab81fb47de650527de176a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a971061a55ba26b55e2d42a675d4390ca5ffb5b7165e11f136754a0de9b45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7hl64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.439005 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.448466 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 13:35:40.816721452 +0000 UTC Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.452179 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.463111 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.477102 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc63d7a0baa85de5b5a49c21dfa58afc0f78980ac5a4d5571a8acc17289df6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.493585 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3817cfa4a4454eef8d58130f57a16e1665d28b56c080f84edcc8f341a5e5267f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:48Z\\\",\\\"message\\\":\\\"2026-01-27T15:50:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8b471834-528d-4042-a73c-e1b76d19dd8a\\\\n2026-01-27T15:50:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8b471834-528d-4042-a73c-e1b76d19dd8a to /host/opt/cni/bin/\\\\n2026-01-27T15:50:03Z [verbose] multus-daemon started\\\\n2026-01-27T15:50:03Z [verbose] Readiness Indicator file check\\\\n2026-01-27T15:50:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.497146 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.497185 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.497212 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.497231 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.497246 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:58Z","lastTransitionTime":"2026-01-27T15:50:58Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.508704 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.520004 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.531041 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.545341 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.560570 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034f
d8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.573337 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r296r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03660290-055d-4f50-be45-3d6d9c023b34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r296r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:50:58Z is after 2025-08-24T17:21:41Z" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.599989 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.600033 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.600043 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.600059 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.600072 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:58Z","lastTransitionTime":"2026-01-27T15:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.702261 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.702317 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.702333 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.702353 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.702365 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:58Z","lastTransitionTime":"2026-01-27T15:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.804928 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.804996 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.805008 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.805027 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.805041 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:58Z","lastTransitionTime":"2026-01-27T15:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.907543 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.907590 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.907601 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.907618 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:58 crc kubenswrapper[4767]: I0127 15:50:58.907630 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:58Z","lastTransitionTime":"2026-01-27T15:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.010411 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.010450 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.010459 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.010473 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.010485 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:59Z","lastTransitionTime":"2026-01-27T15:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.112626 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.112677 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.112688 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.112707 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.112720 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:59Z","lastTransitionTime":"2026-01-27T15:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.215597 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.215678 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.215697 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.215721 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.215738 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:59Z","lastTransitionTime":"2026-01-27T15:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.318934 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.319012 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.319026 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.319043 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.319057 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:59Z","lastTransitionTime":"2026-01-27T15:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.325256 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.325385 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.325482 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:50:59 crc kubenswrapper[4767]: E0127 15:50:59.325614 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.325760 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:50:59 crc kubenswrapper[4767]: E0127 15:50:59.325860 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:50:59 crc kubenswrapper[4767]: E0127 15:50:59.326026 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:50:59 crc kubenswrapper[4767]: E0127 15:50:59.326130 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.422373 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.422448 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.422471 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.422502 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.422523 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:59Z","lastTransitionTime":"2026-01-27T15:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.449328 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 17:09:47.909510749 +0000 UTC Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.525291 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.525325 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.525333 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.525347 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.525358 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:59Z","lastTransitionTime":"2026-01-27T15:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.628518 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.628569 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.628579 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.628594 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.628603 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:59Z","lastTransitionTime":"2026-01-27T15:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.730832 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.730874 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.730886 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.730902 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.730915 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:59Z","lastTransitionTime":"2026-01-27T15:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.832928 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.832993 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.833005 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.833020 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.833030 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:59Z","lastTransitionTime":"2026-01-27T15:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.935005 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.935041 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.935050 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.935064 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:50:59 crc kubenswrapper[4767]: I0127 15:50:59.935074 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:50:59Z","lastTransitionTime":"2026-01-27T15:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.038050 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.038097 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.038132 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.038174 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.038186 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:00Z","lastTransitionTime":"2026-01-27T15:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.141302 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.141353 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.141372 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.141394 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.141409 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:00Z","lastTransitionTime":"2026-01-27T15:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.244333 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.244360 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.244368 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.244380 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.244389 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:00Z","lastTransitionTime":"2026-01-27T15:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.346509 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.346561 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.346572 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.346588 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.346600 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:00Z","lastTransitionTime":"2026-01-27T15:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.448725 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.448787 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.448805 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.448835 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.448855 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:00Z","lastTransitionTime":"2026-01-27T15:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.449483 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 06:58:44.091825689 +0000 UTC Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.551791 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.551864 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.551883 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.551913 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.551933 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:00Z","lastTransitionTime":"2026-01-27T15:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.654543 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.654596 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.654608 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.654627 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.654639 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:00Z","lastTransitionTime":"2026-01-27T15:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.758163 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.758226 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.758237 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.758254 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.758266 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:00Z","lastTransitionTime":"2026-01-27T15:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.860779 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.860845 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.860853 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.860886 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.860897 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:00Z","lastTransitionTime":"2026-01-27T15:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.963691 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.963751 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.963765 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.963790 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:00 crc kubenswrapper[4767]: I0127 15:51:00.963807 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:00Z","lastTransitionTime":"2026-01-27T15:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.075002 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.075045 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.075057 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.075075 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.075097 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:01Z","lastTransitionTime":"2026-01-27T15:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.177802 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.177862 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.177903 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.177927 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.177944 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:01Z","lastTransitionTime":"2026-01-27T15:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.280263 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.280302 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.280315 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.280332 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.280343 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:01Z","lastTransitionTime":"2026-01-27T15:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.324988 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.325031 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.325006 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.324990 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:51:01 crc kubenswrapper[4767]: E0127 15:51:01.325134 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:51:01 crc kubenswrapper[4767]: E0127 15:51:01.325234 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:51:01 crc kubenswrapper[4767]: E0127 15:51:01.325341 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:51:01 crc kubenswrapper[4767]: E0127 15:51:01.325487 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
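pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34"

All four "Error syncing pod, skipping" entries above trace back to one fact: the container runtime reports NetworkReady=false while /etc/kubernetes/cni/net.d/ holds no CNI configuration, and on this cluster that configuration only appears once the network operator and multus come up. A simplified stand-in for the runtime's directory check (the real logic lives in CRI-O/ocicni; the suffixes below are the standard CNI config extensions):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// networkReady reports whether at least one CNI network config is present
// in the conf dir; while it returns false, the CRI advertises
// NetworkReady=false and the kubelet relays exactly the message seen above.
func networkReady(confDir string) bool {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false // a missing or unreadable dir also counts as not ready
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(networkReady("/etc/kubernetes/cni/net.d"))
}
```

Until that check flips to true, sandbox creation is refused for every pod that needs cluster networking, which is why the same four pods are skipped on each sync attempt.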
pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.383252 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.383297 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.383309 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.383345 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.383359 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:01Z","lastTransitionTime":"2026-01-27T15:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.449994 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 21:31:01.150727249 +0000 UTC Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.486788 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.486850 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.486870 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.486895 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.486910 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:01Z","lastTransitionTime":"2026-01-27T15:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.589591 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.589644 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.589656 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.589676 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.589688 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:01Z","lastTransitionTime":"2026-01-27T15:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.692568 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.692622 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.692633 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.692654 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.692666 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:01Z","lastTransitionTime":"2026-01-27T15:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.797081 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.797134 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.797146 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.797167 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.797180 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:01Z","lastTransitionTime":"2026-01-27T15:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.899861 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.899910 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.899920 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.899934 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:01 crc kubenswrapper[4767]: I0127 15:51:01.899944 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:01Z","lastTransitionTime":"2026-01-27T15:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.003100 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.003131 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.003140 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.003152 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.003161 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:02Z","lastTransitionTime":"2026-01-27T15:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.105934 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.105988 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.106000 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.106018 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.106029 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:02Z","lastTransitionTime":"2026-01-27T15:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.187995 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.188084 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:51:02 crc kubenswrapper[4767]: E0127 15:51:02.188199 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:06.188171599 +0000 UTC m=+148.577189122 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:51:02 crc kubenswrapper[4767]: E0127 15:51:02.188258 4767 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 15:51:02 crc kubenswrapper[4767]: E0127 15:51:02.188320 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:52:06.188306303 +0000 UTC m=+148.577323826 (durationBeforeRetry 1m4s). 
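Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered

The "No retries permitted until ... (durationBeforeRetry 1m4s)" lines come from the volume manager's per-operation exponential backoff: every failed mount or unmount doubles the wait before the next attempt, up to a cap. The UnmountVolume failure above is a registration race of a similar kind: the kubevirt.io.hostpath-provisioner CSI driver has not yet re-registered with the kubelet after the restart, so TearDown cannot find it and is parked on the same backoff. A sketch of the doubling schedule, assuming kubelet's usual parameters (an initial delay of 500ms and a cap of roughly two minutes; the exact constants are an assumption here):

```go
package main

import (
	"fmt"
	"time"
)

// durationBeforeRetry mirrors the doubling backoff visible in the
// nestedpendingoperations.go entries: each consecutive failure of the same
// volume operation doubles the wait before the next attempt, up to maxWait.
func durationBeforeRetry(failures int) time.Duration {
	const (
		initial = 500 * time.Millisecond
		maxWait = 2*time.Minute + 2*time.Second
	)
	d := initial
	for i := 0; i < failures; i++ {
		d *= 2
		if d >= maxWait {
			return maxWait
		}
	}
	return d
}

func main() {
	for i := 0; i <= 8; i++ {
		fmt.Printf("failure %d -> wait %v\n", i, durationBeforeRetry(i))
	}
	// failure 7 -> wait 1m4s, the durationBeforeRetry printed in the log.
}
```

The m=+148.577... suffix on the retry time is Go's monotonic-clock reading, i.e. the retry moment expressed as an offset from kubelet process start.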
Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.208395 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.208432 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.208442 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.208456 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.208465 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:02Z","lastTransitionTime":"2026-01-27T15:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.289075 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.289156 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.289480 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:51:02 crc kubenswrapper[4767]: E0127 15:51:02.289582 4767 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:51:02 crc kubenswrapper[4767]: E0127 15:51:02.289663 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 15:52:06.289643493 +0000 UTC m=+148.678661056 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 15:51:02 crc kubenswrapper[4767]: E0127 15:51:02.289799 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:51:02 crc kubenswrapper[4767]: E0127 15:51:02.289847 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:51:02 crc kubenswrapper[4767]: E0127 15:51:02.289863 4767 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:51:02 crc kubenswrapper[4767]: E0127 15:51:02.289931 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 15:52:06.28991275 +0000 UTC m=+148.678930343 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:51:02 crc kubenswrapper[4767]: E0127 15:51:02.289799 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 15:51:02 crc kubenswrapper[4767]: E0127 15:51:02.290063 4767 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 15:51:02 crc kubenswrapper[4767]: E0127 15:51:02.290088 4767 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 15:51:02 crc kubenswrapper[4767]: E0127 15:51:02.290163 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 15:52:06.290139417 +0000 UTC m=+148.679157020 (durationBeforeRetry 1m4s). 
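Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]

kube-api-access-s2dwl and kube-api-access-cqllr are the generated service-account volumes: projected volumes that bundle the pod's token with the kube-root-ca.crt and, on OpenShift, openshift-service-ca.crt ConfigMaps. Setup therefore needs both ConfigMaps to be visible to the kubelet's object caches first, which is why each failure lists the two "not registered" sources together. A sketch of that volume's shape using the public core/v1 types (the downward-API namespace source the real volume also carries is omitted for brevity):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	expiration := int64(3607) // the token lifetime kubelet requests for kube-api-access volumes

	vol := v1.Volume{
		Name: "kube-api-access-s2dwl",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{
					// Bound service-account token for the pod.
					{ServiceAccountToken: &v1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiration,
					}},
					// Cluster CA bundle; must exist in the pod's namespace.
					{ConfigMap: &v1.ConfigMapProjection{
						LocalObjectReference: v1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []v1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					// OpenShift service CA; the second "not registered" source above.
					{ConfigMap: &v1.ConfigMapProjection{
						LocalObjectReference: v1.LocalObjectReference{Name: "openshift-service-ca.crt"},
						Items:                []v1.KeyToPath{{Key: "service-ca.crt", Path: "service-ca.crt"}},
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```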
Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.310591 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.310639 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.310651 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.310667 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.310679 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:02Z","lastTransitionTime":"2026-01-27T15:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.413324 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.413362 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.413374 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.413387 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.413399 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:02Z","lastTransitionTime":"2026-01-27T15:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.450821 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 04:19:52.16541429 +0000 UTC Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.515265 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.515597 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.515606 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.515622 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.515631 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:02Z","lastTransitionTime":"2026-01-27T15:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.618839 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.618907 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.618929 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.618958 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.618979 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:02Z","lastTransitionTime":"2026-01-27T15:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
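Has your network provider started?"}

The certificate_manager.go entries report a different rotation deadline on every pass (2025-12-04 earlier, 2025-12-19 here) because client-go re-draws the deadline at random from the tail of the certificate's lifetime on each check; once the drawn deadline lies in the past, rotation is due and the check repeats. A minimal sketch of that jitter, assuming a 70%-90% window of the lifetime (the exact bounds client-go uses are an assumption here):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point in the final stretch of the
// certificate's validity, so repeated calls yield different deadlines,
// just like the successive certificate_manager.go log lines.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // expiry from the log
	notBefore := notAfter.Add(-9 * 30 * 24 * time.Hour)             // hypothetical issue time
	for i := 0; i < 3; i++ {
		fmt.Println(rotationDeadline(notBefore, notAfter))
	}
}
```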
Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.721837 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.721871 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.721880 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.721894 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.721905 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:02Z","lastTransitionTime":"2026-01-27T15:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.824999 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.825082 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.825109 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.825140 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.825165 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:02Z","lastTransitionTime":"2026-01-27T15:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.928509 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.928594 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.928629 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.928661 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:02 crc kubenswrapper[4767]: I0127 15:51:02.928682 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:02Z","lastTransitionTime":"2026-01-27T15:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.031117 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.031178 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.031231 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.031254 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.031267 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:03Z","lastTransitionTime":"2026-01-27T15:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.134558 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.134682 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.134699 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.134724 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.134744 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:03Z","lastTransitionTime":"2026-01-27T15:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.237919 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.237980 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.238002 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.238031 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.238052 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:03Z","lastTransitionTime":"2026-01-27T15:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.324565 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.324663 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.324565 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:51:03 crc kubenswrapper[4767]: E0127 15:51:03.324816 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.324587 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:51:03 crc kubenswrapper[4767]: E0127 15:51:03.324742 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:51:03 crc kubenswrapper[4767]: E0127 15:51:03.325063 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:51:03 crc kubenswrapper[4767]: E0127 15:51:03.325124 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.340846 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.340894 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.340915 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.340934 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.340946 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:03Z","lastTransitionTime":"2026-01-27T15:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.444247 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.444295 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.449173 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.449236 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.449252 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:03Z","lastTransitionTime":"2026-01-27T15:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.451901 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 01:59:59.258343216 +0000 UTC Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.551597 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.551646 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.551656 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.551672 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.551683 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:03Z","lastTransitionTime":"2026-01-27T15:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.653898 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.653934 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.653942 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.653957 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.653967 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:03Z","lastTransitionTime":"2026-01-27T15:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.756884 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.756955 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.756977 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.757008 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.757032 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:03Z","lastTransitionTime":"2026-01-27T15:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.859447 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.859510 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.859528 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.859548 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.859563 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:03Z","lastTransitionTime":"2026-01-27T15:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.962697 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.962763 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.962781 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.962810 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:03 crc kubenswrapper[4767]: I0127 15:51:03.962831 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:03Z","lastTransitionTime":"2026-01-27T15:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.065188 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.065251 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.065263 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.065278 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.065292 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:04Z","lastTransitionTime":"2026-01-27T15:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.167414 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.167469 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.167484 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.167501 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.167512 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:04Z","lastTransitionTime":"2026-01-27T15:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.270297 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.270345 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.270359 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.270376 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.270388 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:04Z","lastTransitionTime":"2026-01-27T15:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.373523 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.373574 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.373586 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.373603 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.373615 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:04Z","lastTransitionTime":"2026-01-27T15:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.452907 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 00:34:25.292257229 +0000 UTC Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.476766 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.476827 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.476851 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.476878 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.476900 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:04Z","lastTransitionTime":"2026-01-27T15:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.580000 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.580055 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.580080 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.580105 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.580121 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:04Z","lastTransitionTime":"2026-01-27T15:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.683031 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.683084 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.683096 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.683114 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.683126 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:04Z","lastTransitionTime":"2026-01-27T15:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.785359 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.785419 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.785435 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.785459 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.785475 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:04Z","lastTransitionTime":"2026-01-27T15:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.887246 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.887294 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.887321 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.887340 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.887354 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:04Z","lastTransitionTime":"2026-01-27T15:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.990330 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.990381 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.990393 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.990411 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:04 crc kubenswrapper[4767]: I0127 15:51:04.990422 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:04Z","lastTransitionTime":"2026-01-27T15:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.093004 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.093053 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.093063 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.093079 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.093089 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:05Z","lastTransitionTime":"2026-01-27T15:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.195449 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.195504 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.195520 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.195537 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.195562 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:05Z","lastTransitionTime":"2026-01-27T15:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.297726 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.297779 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.297788 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.297804 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.297813 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:05Z","lastTransitionTime":"2026-01-27T15:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.325067 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.325124 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.325177 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:51:05 crc kubenswrapper[4767]: E0127 15:51:05.325237 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.325184 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:51:05 crc kubenswrapper[4767]: E0127 15:51:05.325322 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:51:05 crc kubenswrapper[4767]: E0127 15:51:05.325574 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:51:05 crc kubenswrapper[4767]: E0127 15:51:05.325761 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.400547 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.400624 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.400636 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.400652 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.400664 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:05Z","lastTransitionTime":"2026-01-27T15:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.453148 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 17:32:28.776679316 +0000 UTC Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.502610 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.502655 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.502667 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.502688 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.502698 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:05Z","lastTransitionTime":"2026-01-27T15:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.605267 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.605300 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.605310 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.605327 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.605339 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:05Z","lastTransitionTime":"2026-01-27T15:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.707919 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.707958 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.707971 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.707991 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.708005 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:05Z","lastTransitionTime":"2026-01-27T15:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.810240 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.810286 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.810302 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.810319 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.810329 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:05Z","lastTransitionTime":"2026-01-27T15:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.912756 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.912800 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.912812 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.912834 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:05 crc kubenswrapper[4767]: I0127 15:51:05.912848 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:05Z","lastTransitionTime":"2026-01-27T15:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.015185 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.015259 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.015274 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.015297 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.015313 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:06Z","lastTransitionTime":"2026-01-27T15:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.117715 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.117759 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.117772 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.117792 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.117806 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:06Z","lastTransitionTime":"2026-01-27T15:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.220638 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.220673 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.220682 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.220724 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.220736 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:06Z","lastTransitionTime":"2026-01-27T15:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.323151 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.323227 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.323239 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.323255 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.323266 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:06Z","lastTransitionTime":"2026-01-27T15:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.349386 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.349456 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.425756 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.425804 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.425819 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.425837 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.425849 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:06Z","lastTransitionTime":"2026-01-27T15:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.453911 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 22:13:58.345551016 +0000 UTC Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.528230 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.528267 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.528279 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.528299 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.528311 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:06Z","lastTransitionTime":"2026-01-27T15:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.623908 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.623966 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.623978 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.624000 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.624018 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:06Z","lastTransitionTime":"2026-01-27T15:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:06 crc kubenswrapper[4767]: E0127 15:51:06.639434 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.644742 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.644791 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.644802 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.644822 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.644837 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:06Z","lastTransitionTime":"2026-01-27T15:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:06 crc kubenswrapper[4767]: E0127 15:51:06.659160 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.663527 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.663565 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.663578 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.663595 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.663608 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:06Z","lastTransitionTime":"2026-01-27T15:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:06 crc kubenswrapper[4767]: E0127 15:51:06.677552 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.681388 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.681452 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.681465 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.681490 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.681504 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:06Z","lastTransitionTime":"2026-01-27T15:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:06 crc kubenswrapper[4767]: E0127 15:51:06.696150 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.699943 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.699986 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.699999 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.700023 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.700039 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:06Z","lastTransitionTime":"2026-01-27T15:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:06 crc kubenswrapper[4767]: E0127 15:51:06.714249 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:06Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:06 crc kubenswrapper[4767]: E0127 15:51:06.714449 4767 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.716549 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.716576 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.716585 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.716606 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.716768 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:06Z","lastTransitionTime":"2026-01-27T15:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.821095 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.821175 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.821199 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.821258 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.821276 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:06Z","lastTransitionTime":"2026-01-27T15:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.923882 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.923944 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.923957 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.923977 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:06 crc kubenswrapper[4767]: I0127 15:51:06.923992 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:06Z","lastTransitionTime":"2026-01-27T15:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.027870 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.027952 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.027992 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.028034 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.028059 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:07Z","lastTransitionTime":"2026-01-27T15:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.131629 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.131724 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.131749 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.131781 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.131803 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:07Z","lastTransitionTime":"2026-01-27T15:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.235320 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.235384 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.235397 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.235419 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.235430 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:07Z","lastTransitionTime":"2026-01-27T15:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.325051 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.325068 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.325134 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.325158 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:51:07 crc kubenswrapper[4767]: E0127 15:51:07.325328 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:51:07 crc kubenswrapper[4767]: E0127 15:51:07.325428 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:51:07 crc kubenswrapper[4767]: E0127 15:51:07.325491 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:51:07 crc kubenswrapper[4767]: E0127 15:51:07.325571 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.337780 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.337820 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.337862 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.337877 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.337888 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:07Z","lastTransitionTime":"2026-01-27T15:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.439468 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.439514 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.439525 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.439539 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.439548 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:07Z","lastTransitionTime":"2026-01-27T15:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.454931 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 08:51:42.696624101 +0000 UTC Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.541439 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.541486 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.541495 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.541509 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.541519 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:07Z","lastTransitionTime":"2026-01-27T15:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.644279 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.644337 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.644375 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.644404 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.644427 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:07Z","lastTransitionTime":"2026-01-27T15:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.747570 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.747655 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.747686 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.747720 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.747747 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:07Z","lastTransitionTime":"2026-01-27T15:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.850696 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.850814 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.850841 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.850870 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.850895 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:07Z","lastTransitionTime":"2026-01-27T15:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.955144 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.955214 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.955226 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.955245 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:07 crc kubenswrapper[4767]: I0127 15:51:07.955260 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:07Z","lastTransitionTime":"2026-01-27T15:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.060271 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.060378 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.060395 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.060420 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.060439 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:08Z","lastTransitionTime":"2026-01-27T15:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.163498 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.163552 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.163570 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.163592 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.163605 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:08Z","lastTransitionTime":"2026-01-27T15:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.266911 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.266997 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.267009 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.267030 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.267042 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:08Z","lastTransitionTime":"2026-01-27T15:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.326117 4767 scope.go:117] "RemoveContainer" containerID="a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb" Jan 27 15:51:08 crc kubenswrapper[4767]: E0127 15:51:08.326442 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x97k7_openshift-ovn-kubernetes(96ceb606-f7e2-4d60-a632-a9443e01b99a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.346894 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e991bd97-3a44-4291-814f-68145fd2ed66\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d814e1bbd3556790ea49fa61224968631434d61369aa14a3cbc4f54161ccf4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48bed3f848319c4c0a83edb33a6e88a70259e1abfcd75f44bb4cd5cf84166355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://998b9f77d56b09a7f43564fb2cfd1a2f0c7667ead472a734e1619bb36d063e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b50cd3e07e1be2c4acfe5f7f9b2d7c2081cd707ac79700e58a4a69365be9061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://765188294d75bfe9dcdf6ee636af3821fc26b00005e03e3d4330b9e097824a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32c5a93a9bc5e435a644aca26c468de6d30a428455aa8fc1c3f789916f7e1c1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c5a93a9bc5e435a644aca26c468de6d30a428455aa8fc1c3f789916f7e1c1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915360c6a5e156d4d4
2f2798ded12a113619420fc200c81f3fa3cefab71a47df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://915360c6a5e156d4d42f2798ded12a113619420fc200c81f3fa3cefab71a47df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc158e087013235e67466bf746c8bea1ff5674609a9b16b01a90a2a5a39ed334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc158e087013235e67466bf746c8bea1ff5674609a9b16b01a90a2a5a39ed334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.360122 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.369949 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.370003 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.370020 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.370040 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.370051 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:08Z","lastTransitionTime":"2026-01-27T15:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.372338 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.384269 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.399301 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc63d7a0baa85de5b5a49c21dfa58afc0f78980ac5a4d5571a8acc17289df6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.412426 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3817cfa4a4454eef8d58130f57a16e1665d28b56c080f84edcc8f341a5e5267f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:48Z\\\",\\\"message\\\":\\\"2026-01-27T15:50:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8b471834-528d-4042-a73c-e1b76d19dd8a\\\\n2026-01-27T15:50:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8b471834-528d-4042-a73c-e1b76d19dd8a to /host/opt/cni/bin/\\\\n2026-01-27T15:50:03Z [verbose] multus-daemon started\\\\n2026-01-27T15:50:03Z [verbose] Readiness Indicator file check\\\\n2026-01-27T15:50:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.431779 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfb98be5-2dff-40fa-9106-243d23891837\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db406da5a075948dbaf16512e9f6265715e1167ebab81fb47de650527de176a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a971061a55ba26b55e2d42a675d4390ca5ffb5b7165e11f136754a0de9b45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7hl64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:08Z is after 2025-08-24T17:21:41Z" Jan 27 
15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.444101 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5313f981-d3ed-4106-9b58-bfc29338ac81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e5bddbfbc9603046959d0ee01d0f797d0098ce21700eec3931967e9f471084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a76a7282d4a6d2928b7a20e383ca260fa23c152c91a9b0d065c3545d1703a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a76a7282d4a6d2928b7a20e383ca260fa23c152c91a9b0d065c3545d1703a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.454017 4767 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.455859 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 04:16:05.701111152 +0000 UTC Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.462981 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.473168 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.473244 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.473258 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.473275 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.473287 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:08Z","lastTransitionTime":"2026-01-27T15:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.474571 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.486318 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034fd8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.498343 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-r296r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03660290-055d-4f50-be45-3d6d9c023b34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r296r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.512038 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.526071 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.539490 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.551003 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.567355 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a738457c903241fb5a776c2ca052da9636d6dffd
a4d00b9dde48ae249818f0eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"message\\\":\\\"go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0127 15:50:56.475410 6818 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64\\\\nI0127 15:50:56.475474 6818 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-r296r before timer (time: 2026-01-27 15:50:57.573597709 +0000 UTC m=+1.727339485): skip\\\\nI0127 15:50:56.475445 6818 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:56.475507 6818 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0127 15:50:56.475519 6818 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0127 15:50:56.475522 6818 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0127 15:50:56.475538 6818 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0127 15:50:56.475489 6818 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-man\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x97k7_openshift-ovn-kubernetes(96ceb606-f7e2-4d60-a632-a9443e01b99a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.575581 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.575626 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.575638 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.575656 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.575669 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:08Z","lastTransitionTime":"2026-01-27T15:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.578716 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85599359-8ed9-48d0-a13e-3f2d3c2f4915\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa73f7eadac5c6ff80c55f80cd63c9a2aca033e9db04b351779738aeea07d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a30ef0d655a13360eb3001feb2d6d2e511d3063e2903f2fcff4714af7799c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4765fc3fd0fe4e4940f0e9b2421dbefe5545487182613514fd99b05a9b3cbb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f768bb7e98e5892e239288a3478996dcbfdaa66c0009223ee65b2c689a49a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f768bb7e98e5892e239288a3478996dcbfdaa66c0009223ee65b2c689a49a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:08Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.678224 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.678271 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.678282 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.678298 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.678310 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:08Z","lastTransitionTime":"2026-01-27T15:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.781488 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.781541 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.781559 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.781582 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.781598 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:08Z","lastTransitionTime":"2026-01-27T15:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.884540 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.884591 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.884608 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.884640 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.884664 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:08Z","lastTransitionTime":"2026-01-27T15:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.986886 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.986970 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.986995 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.987027 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:08 crc kubenswrapper[4767]: I0127 15:51:08.987053 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:08Z","lastTransitionTime":"2026-01-27T15:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.089980 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.090025 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.090040 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.090059 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.090072 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:09Z","lastTransitionTime":"2026-01-27T15:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.193592 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.194427 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.194475 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.194503 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.194521 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:09Z","lastTransitionTime":"2026-01-27T15:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.297188 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.297254 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.297263 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.297332 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.297345 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:09Z","lastTransitionTime":"2026-01-27T15:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.325176 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.325283 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.325277 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.325304 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:51:09 crc kubenswrapper[4767]: E0127 15:51:09.325389 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:51:09 crc kubenswrapper[4767]: E0127 15:51:09.325487 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:51:09 crc kubenswrapper[4767]: E0127 15:51:09.325616 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:51:09 crc kubenswrapper[4767]: E0127 15:51:09.325685 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.400020 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.400093 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.400115 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.400143 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.400164 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:09Z","lastTransitionTime":"2026-01-27T15:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.456267 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 21:05:52.578006833 +0000 UTC Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.503342 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.503395 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.503406 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.503428 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.503438 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:09Z","lastTransitionTime":"2026-01-27T15:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.606284 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.606346 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.606368 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.606393 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.606407 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:09Z","lastTransitionTime":"2026-01-27T15:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.709602 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.709661 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.709675 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.709697 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.709711 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:09Z","lastTransitionTime":"2026-01-27T15:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.812639 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.812690 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.812702 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.812718 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.812730 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:09Z","lastTransitionTime":"2026-01-27T15:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.914572 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.914612 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.914621 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.914634 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:09 crc kubenswrapper[4767]: I0127 15:51:09.914643 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:09Z","lastTransitionTime":"2026-01-27T15:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.017130 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.017188 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.017237 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.017255 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.017268 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:10Z","lastTransitionTime":"2026-01-27T15:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.120613 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.120671 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.120685 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.120707 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.120724 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:10Z","lastTransitionTime":"2026-01-27T15:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.223742 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.223788 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.223799 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.223818 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.223833 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:10Z","lastTransitionTime":"2026-01-27T15:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.326106 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.326152 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.326163 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.326181 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.326194 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:10Z","lastTransitionTime":"2026-01-27T15:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.429693 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.429742 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.429752 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.429771 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.429784 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:10Z","lastTransitionTime":"2026-01-27T15:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.457163 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 02:39:59.17585926 +0000 UTC Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.533743 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.533940 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.533972 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.534057 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.534124 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:10Z","lastTransitionTime":"2026-01-27T15:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.637570 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.637636 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.637705 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.637738 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.637759 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:10Z","lastTransitionTime":"2026-01-27T15:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.740148 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.740185 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.740212 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.740231 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.740245 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:10Z","lastTransitionTime":"2026-01-27T15:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.842362 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.842415 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.842432 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.842452 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.842464 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:10Z","lastTransitionTime":"2026-01-27T15:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.945232 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.945300 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.945315 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.945342 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:10 crc kubenswrapper[4767]: I0127 15:51:10.945366 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:10Z","lastTransitionTime":"2026-01-27T15:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.048372 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.048470 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.048486 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.048514 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.048532 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:11Z","lastTransitionTime":"2026-01-27T15:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.151232 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.151279 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.151291 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.151309 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.151324 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:11Z","lastTransitionTime":"2026-01-27T15:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.253465 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.253505 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.253517 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.253533 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.253546 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:11Z","lastTransitionTime":"2026-01-27T15:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.324555 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.324671 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.324699 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:51:11 crc kubenswrapper[4767]: E0127 15:51:11.324789 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:51:11 crc kubenswrapper[4767]: E0127 15:51:11.324893 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:51:11 crc kubenswrapper[4767]: E0127 15:51:11.324966 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.325375 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:51:11 crc kubenswrapper[4767]: E0127 15:51:11.325437 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.356404 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.356448 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.356461 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.356476 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.356486 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:11Z","lastTransitionTime":"2026-01-27T15:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.457708 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 03:57:38.601365612 +0000 UTC Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.459041 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.459086 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.459098 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.459114 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.459125 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:11Z","lastTransitionTime":"2026-01-27T15:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.562948 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.562984 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.562993 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.563007 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.563016 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:11Z","lastTransitionTime":"2026-01-27T15:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.666379 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.666453 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.666478 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.666507 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.666530 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:11Z","lastTransitionTime":"2026-01-27T15:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.769194 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.769249 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.769262 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.769278 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.769289 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:11Z","lastTransitionTime":"2026-01-27T15:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.871655 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.871694 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.871703 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.871719 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.871730 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:11Z","lastTransitionTime":"2026-01-27T15:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.975854 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.975931 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.975955 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.975984 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:11 crc kubenswrapper[4767]: I0127 15:51:11.976006 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:11Z","lastTransitionTime":"2026-01-27T15:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.078995 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.079086 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.079102 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.079144 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.079168 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:12Z","lastTransitionTime":"2026-01-27T15:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.182119 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.182186 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.182223 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.182248 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.182265 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:12Z","lastTransitionTime":"2026-01-27T15:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.284853 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.284907 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.284917 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.284933 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.284945 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:12Z","lastTransitionTime":"2026-01-27T15:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.388029 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.388077 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.388086 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.388104 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.388116 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:12Z","lastTransitionTime":"2026-01-27T15:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.458621 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 01:17:37.412194299 +0000 UTC Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.491344 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.491396 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.491406 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.491423 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.491438 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:12Z","lastTransitionTime":"2026-01-27T15:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.593978 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.594025 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.594034 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.594049 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.594060 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:12Z","lastTransitionTime":"2026-01-27T15:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.696063 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.696094 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.696104 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.696118 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.696130 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:12Z","lastTransitionTime":"2026-01-27T15:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.799023 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.799070 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.799081 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.799097 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.799109 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:12Z","lastTransitionTime":"2026-01-27T15:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.900897 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.900942 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.900951 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.900964 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:12 crc kubenswrapper[4767]: I0127 15:51:12.900975 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:12Z","lastTransitionTime":"2026-01-27T15:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.003924 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.003982 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.003999 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.004019 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.004036 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:13Z","lastTransitionTime":"2026-01-27T15:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.106799 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.106850 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.106862 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.106879 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.106890 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:13Z","lastTransitionTime":"2026-01-27T15:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.208837 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.208924 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.208941 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.208959 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.208971 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:13Z","lastTransitionTime":"2026-01-27T15:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.311486 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.311529 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.311540 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.311556 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.311567 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:13Z","lastTransitionTime":"2026-01-27T15:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.325233 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.325275 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.325286 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.325308 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:51:13 crc kubenswrapper[4767]: E0127 15:51:13.325406 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:51:13 crc kubenswrapper[4767]: E0127 15:51:13.325508 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:51:13 crc kubenswrapper[4767]: E0127 15:51:13.325629 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:51:13 crc kubenswrapper[4767]: E0127 15:51:13.326020 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.414004 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.414054 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.414066 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.414084 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.414097 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:13Z","lastTransitionTime":"2026-01-27T15:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.459805 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 02:56:34.584946846 +0000 UTC Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.516566 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.516623 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.516635 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.516651 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.516663 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:13Z","lastTransitionTime":"2026-01-27T15:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.618625 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.618662 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.618672 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.618691 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.618702 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:13Z","lastTransitionTime":"2026-01-27T15:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.720995 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.721058 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.721073 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.721089 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.721101 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:13Z","lastTransitionTime":"2026-01-27T15:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.824124 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.824172 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.824194 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.824232 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.824249 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:13Z","lastTransitionTime":"2026-01-27T15:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.927701 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.927750 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.927762 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.927783 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:13 crc kubenswrapper[4767]: I0127 15:51:13.927796 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:13Z","lastTransitionTime":"2026-01-27T15:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.030234 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.030295 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.030314 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.030346 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.030362 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:14Z","lastTransitionTime":"2026-01-27T15:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.132662 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.132711 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.132719 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.132736 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.132748 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:14Z","lastTransitionTime":"2026-01-27T15:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.235137 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.235184 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.235194 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.235223 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.235236 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:14Z","lastTransitionTime":"2026-01-27T15:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.338224 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.338269 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.338278 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.338297 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.338306 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:14Z","lastTransitionTime":"2026-01-27T15:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.439910 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.439974 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.439985 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.440000 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.440012 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:14Z","lastTransitionTime":"2026-01-27T15:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.460270 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 00:10:41.644810092 +0000 UTC Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.542057 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.542106 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.542119 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.542138 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.542150 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:14Z","lastTransitionTime":"2026-01-27T15:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.644525 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.644595 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.644619 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.644647 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.644671 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:14Z","lastTransitionTime":"2026-01-27T15:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.747059 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.747101 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.747109 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.747121 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.747132 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:14Z","lastTransitionTime":"2026-01-27T15:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.849641 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.849693 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.849709 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.849729 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.849743 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:14Z","lastTransitionTime":"2026-01-27T15:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.952474 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.952529 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.952543 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.952563 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:14 crc kubenswrapper[4767]: I0127 15:51:14.952579 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:14Z","lastTransitionTime":"2026-01-27T15:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 15:51:15 crc kubenswrapper[4767]: I0127 15:51:15.055420 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 15:51:15 crc kubenswrapper[4767]: I0127 15:51:15.055464 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 15:51:15 crc kubenswrapper[4767]: I0127 15:51:15.055475 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 15:51:15 crc kubenswrapper[4767]: I0127 15:51:15.055489 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 15:51:15 crc kubenswrapper[4767]: I0127 15:51:15.055503 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:15Z","lastTransitionTime":"2026-01-27T15:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[The five messages above repeat at roughly 100 ms intervals, with only their timestamps advancing, through Jan 27 15:51:16.983; only the distinct entries interleaved with that stream are reproduced below.]
Jan 27 15:51:15 crc kubenswrapper[4767]: I0127 15:51:15.324894 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 15:51:15 crc kubenswrapper[4767]: I0127 15:51:15.324930 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r"
Jan 27 15:51:15 crc kubenswrapper[4767]: I0127 15:51:15.324918 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 15:51:15 crc kubenswrapper[4767]: E0127 15:51:15.325052 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 15:51:15 crc kubenswrapper[4767]: I0127 15:51:15.324899 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 15:51:15 crc kubenswrapper[4767]: E0127 15:51:15.325318 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34"
Jan 27 15:51:15 crc kubenswrapper[4767]: E0127 15:51:15.325336 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 15:51:15 crc kubenswrapper[4767]: E0127 15:51:15.325406 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 15:51:15 crc kubenswrapper[4767]: I0127 15:51:15.460998 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 09:47:39.929210352 +0000 UTC
Jan 27 15:51:16 crc kubenswrapper[4767]: I0127 15:51:16.461401 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 12:30:00.477333359 +0000 UTC
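The two certificate_manager.go entries are a separate, quieter problem: the kubelet's kubernetes.io/kubelet-serving certificate expires on 2026-02-24, but the jittered rotation deadline the manager computed (first 2025-12-22, then 2026-01-02 on recomputation) is already in the past at the log's clock of 2026-01-27, so a rotation attempt is due immediately. Rotation files a CertificateSigningRequest that must be approved before a new certificate is issued; a hedged way to watch for it (the CSR name below is a hypothetical placeholder):

    # A pending kubernetes.io/kubelet-serving request appears here once rotation starts:
    oc get csr
    # Approve a pending request by name (csr-xxxxx is a placeholder):
    oc adm certificate approve csr-xxxxx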
Jan 27 15:51:16 crc kubenswrapper[4767]: E0127 15:51:16.944533 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:16Z is after 2025-08-24T17:21:41Z"
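The status patch above never reaches etcd: the API server first has to call the admission webhook node.network-node-identity.openshift.io at https://127.0.0.1:9743/node, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z, five months before the log's current time, so every node-status update fails TLS verification. A direct check of the certificate's validity window (a sketch; run on the node itself, since the webhook listens on localhost):

    # Expect notAfter=Aug 24 17:21:41 2025 GMT, matching the error above:
    openssl s_client -connect 127.0.0.1:9743 </dev/null 2>/dev/null | openssl x509 -noout -dates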
event="NodeHasNoDiskPressure" Jan 27 15:51:16 crc kubenswrapper[4767]: I0127 15:51:16.948321 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:16 crc kubenswrapper[4767]: I0127 15:51:16.948338 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:16 crc kubenswrapper[4767]: I0127 15:51:16.948349 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:16Z","lastTransitionTime":"2026-01-27T15:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:16 crc kubenswrapper[4767]: E0127 15:51:16.963624 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:16Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:16 crc kubenswrapper[4767]: I0127 15:51:16.967466 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:16 crc kubenswrapper[4767]: I0127 15:51:16.967509 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
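Each failed attempt logs the entire attempted patch, node image list included, which is what makes this journal balloon; the kubelet retries the status update several times per sync (in upstream kubelet the retry loop is bounded by the constant nodeStatusUpdateRetry = 5) and then tries again at the next node-status interval, so these blocks recur until the webhook certificate is fixed. To follow the failures without the payload noise (assuming the systemd unit is named kubelet, as in the header of this log):

    # Count the failed attempts so far:
    journalctl -u kubelet --no-pager | grep -c 'Error updating node status'
    # Show only which webhook is failing, stripped of the payload:
    journalctl -u kubelet --no-pager | grep -o 'failed calling webhook [^:]*' | sort | uniq -c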
event="NodeHasNoDiskPressure" Jan 27 15:51:16 crc kubenswrapper[4767]: I0127 15:51:16.967521 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:16 crc kubenswrapper[4767]: I0127 15:51:16.967540 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:16 crc kubenswrapper[4767]: I0127 15:51:16.967550 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:16Z","lastTransitionTime":"2026-01-27T15:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:16 crc kubenswrapper[4767]: E0127 15:51:16.979135 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:16Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:16 crc kubenswrapper[4767]: I0127 15:51:16.983039 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:16 crc kubenswrapper[4767]: I0127 15:51:16.983075 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:51:16 crc kubenswrapper[4767]: I0127 15:51:16.983256 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:16 crc kubenswrapper[4767]: I0127 15:51:16.983277 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:16 crc kubenswrapper[4767]: I0127 15:51:16.983289 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:16Z","lastTransitionTime":"2026-01-27T15:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:16 crc kubenswrapper[4767]: E0127 15:51:16.996757 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:16Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.002315 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.002374 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.002389 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.002421 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.002448 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:17Z","lastTransitionTime":"2026-01-27T15:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:17 crc kubenswrapper[4767]: E0127 15:51:17.018604 4767 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T15:51:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T15:51:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2cd8151d-a43c-49a6-97ea-751da1662943\\\",\\\"systemUUID\\\":\\\"6dcc28eb-dc96-4a60-8422-dfd2f2d7d81e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:17Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:17 crc kubenswrapper[4767]: E0127 15:51:17.018764 4767 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.021356 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.021418 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.021435 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.021460 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.021475 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:17Z","lastTransitionTime":"2026-01-27T15:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.124756 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.124804 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.124813 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.124837 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.124866 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:17Z","lastTransitionTime":"2026-01-27T15:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.227059 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.227102 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.227118 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.227135 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.227145 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:17Z","lastTransitionTime":"2026-01-27T15:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.325235 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.325276 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.325363 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.325381 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:51:17 crc kubenswrapper[4767]: E0127 15:51:17.325423 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:51:17 crc kubenswrapper[4767]: E0127 15:51:17.325530 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:51:17 crc kubenswrapper[4767]: E0127 15:51:17.325612 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:51:17 crc kubenswrapper[4767]: E0127 15:51:17.325694 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.329649 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.329698 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.329710 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.329726 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.329740 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:17Z","lastTransitionTime":"2026-01-27T15:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.438705 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.438746 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.438757 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.438774 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.438785 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:17Z","lastTransitionTime":"2026-01-27T15:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.462238 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 02:39:56.906395486 +0000 UTC Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.540956 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.541009 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.541019 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.541035 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.541046 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:17Z","lastTransitionTime":"2026-01-27T15:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.643531 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.643571 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.643584 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.643600 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.643611 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:17Z","lastTransitionTime":"2026-01-27T15:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.656117 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03660290-055d-4f50-be45-3d6d9c023b34-metrics-certs\") pod \"network-metrics-daemon-r296r\" (UID: \"03660290-055d-4f50-be45-3d6d9c023b34\") " pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:51:17 crc kubenswrapper[4767]: E0127 15:51:17.656341 4767 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:51:17 crc kubenswrapper[4767]: E0127 15:51:17.656408 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03660290-055d-4f50-be45-3d6d9c023b34-metrics-certs podName:03660290-055d-4f50-be45-3d6d9c023b34 nodeName:}" failed. 
No retries permitted until 2026-01-27 15:52:21.656388882 +0000 UTC m=+164.045406415 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03660290-055d-4f50-be45-3d6d9c023b34-metrics-certs") pod "network-metrics-daemon-r296r" (UID: "03660290-055d-4f50-be45-3d6d9c023b34") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.746249 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.746292 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.746305 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.746320 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.746330 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:17Z","lastTransitionTime":"2026-01-27T15:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.848810 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.848865 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.848878 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.848894 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.848906 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:17Z","lastTransitionTime":"2026-01-27T15:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.951653 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.951703 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.951719 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.951737 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:17 crc kubenswrapper[4767]: I0127 15:51:17.951748 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:17Z","lastTransitionTime":"2026-01-27T15:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.054252 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.054297 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.054308 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.054327 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.054338 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:18Z","lastTransitionTime":"2026-01-27T15:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.156586 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.156630 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.156643 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.156659 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.156671 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:18Z","lastTransitionTime":"2026-01-27T15:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.258961 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.259053 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.259075 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.259100 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.259120 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:18Z","lastTransitionTime":"2026-01-27T15:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.339259 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"573e2605-7b80-4fbd-890e-a659eaf47b04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41673239f19de4ae2542b267fba57d9ce5a3033e0693c4ca39ea4b0355b287fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6f807887f23f36471635dee34d16be91853e41cd5aa2d02cbd61abb5af322cb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6682c6831e78e0fc1d226b93a30754d6804de51e3bad59f2aecb63029c603b62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.350618 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3fb7f5-2925-4714-9e7b-44749885b298\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://618735eec5fb8812129be3a3733b7b5162bcece07fa8577f1e868e667e8497ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d40a73dfb9ca034fd8cf554f1d216546f248cfb6a917464c43bc4f15d0546a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qczrn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mrkmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.364371 4767 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-r296r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03660290-055d-4f50-be45-3d6d9c023b34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqk42\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r296r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.365081 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.365131 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 
15:51:18.365142 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.365163 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.365176 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:18Z","lastTransitionTime":"2026-01-27T15:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.378019 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85599359-8ed9-48d0-a13e-3f2d3c2f4915\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa73f7eadac5c6ff80c55f80cd63c9a2aca033e9db04b351779738aeea07d638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a30ef0d655a13360eb3001feb2d6d2e511d3063e2903f2fcff4714af7799c38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4765fc3fd0fe4e4940
f0e9b2421dbefe5545487182613514fd99b05a9b3cbb2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f768bb7e98e5892e239288a3478996dcbfdaa66c0009223ee65b2c689a49a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f768bb7e98e5892e239288a3478996dcbfdaa66c0009223ee65b2c689a49a88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.393773 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bff4254-e814-4da3-bea2-c1167d764153\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T15:49:52Z\\\",\\\"message\\\":\\\"W0127 15:49:41.528488 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 15:49:41.528915 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769528981 cert, and key in /tmp/serving-cert-679483087/serving-signer.crt, /tmp/serving-cert-679483087/serving-signer.key\\\\nI0127 15:49:41.920592 1 observer_polling.go:159] Starting file observer\\\\nW0127 15:49:41.924131 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 15:49:41.924269 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 15:49:41.926521 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-679483087/tls.crt::/tmp/serving-cert-679483087/tls.key\\\\\\\"\\\\nF0127 15:49:52.519926 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.409105 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.423952 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f57b327bd724aff1cb3b48e9e3ed36ffb620d31b880784270f57387fc1ca346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.443926 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96ceb606-f7e2-4d60-a632-a9443e01b99a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:56Z\\\",\\\"message\\\":\\\"go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0127 15:50:56.475410 6818 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64\\\\nI0127 15:50:56.475474 6818 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-r296r before timer (time: 2026-01-27 15:50:57.573597709 +0000 UTC m=+1.727339485): skip\\\\nI0127 15:50:56.475445 6818 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0127 15:50:56.475507 6818 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0127 15:50:56.475519 6818 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0127 15:50:56.475522 6818 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0127 15:50:56.475538 6818 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0127 15:50:56.475489 6818 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-man\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x97k7_openshift-ovn-kubernetes(96ceb606-f7e2-4d60-a632-a9443e01b99a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lnqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x97k7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.462914 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f02fb217-0bb2-4720-b223-3e3dcf0cff3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc63d7a0baa85de5b5a49c21dfa58afc0f78980ac5a4d5571a8acc17289df6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a468ea01287f413b7367f5ed9cf2cdf0fb107a4b280f6c11b1e67746b702166f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aab2c62f78e0611a4e2075b10b8345cc7c8327e849bde03949b4a402af1b242f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aac254f9a96f6b21ee9ce6abb421f32719fbe0087fa37c91b0c33e4e7ff5fbac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7980f61442914b9953fc8e3f8d5d66bed321e96d408ce8cd3ee36fed92d4cee9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84057f926d77f860cd1f76bdff24b8f559b19bec3e0f023913a6f72b48792b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4855dc662f45d26fea87310c9590ba32c254c1a4c63142b756b968ebc8f11851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:50:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdxz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.463353 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 22:00:09.376544719 +0000 UTC Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.468695 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.468740 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.468756 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.468780 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.468796 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:18Z","lastTransitionTime":"2026-01-27T15:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.479240 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3817cfa4a4454eef8d58130f57a16e1665d28b56c080f84edcc8f341a5e5267f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T15:50:48Z\\\",\\\"message\\\":\\\"2026-01-27T15:50:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8b471834-528d-4042-a73c-e1b76d19dd8a\\\\n2026-01-27T15:50:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8b471834-528d-4042-a73c-e1b76d19dd8a to /host/opt/cni/bin/\\\\n2026-01-27T15:50:03Z [verbose] multus-daemon started\\\\n2026-01-27T15:50:03Z [verbose] Readiness Indicator file check\\\\n2026-01-27T15:50:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdgcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.493464 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfb98be5-2dff-40fa-9106-243d23891837\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db406da5a075948dbaf16512e9f6265715e1167ebab81fb47de650527de176a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a971061a55ba26b55e2d42a675d4390ca5ffb5b7165e11f136754a0de9b45c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fl2kx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7hl64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:18Z is after 2025-08-24T17:21:41Z" Jan 27 
15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.507867 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5313f981-d3ed-4106-9b58-bfc29338ac81\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e5bddbfbc9603046959d0ee01d0f797d0098ce21700eec3931967e9f471084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a76a7282d4a6d2928b7a20e383ca260fa23c152c91a9b0d065c3545d1703a8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a76a7282d4a6d2928b7a20e383ca260fa23c152c91a9b0d065c3545d1703a8f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.535094 4767 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e991bd97-3a44-4291-814f-68145fd2ed66\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d814e1bbd3556790ea49fa61224968631434d61369aa14a3cbc4f54161ccf4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48bed3f848319c4c0a83edb33a6e88a70259e1abfcd75f44bb4cd5cf84166355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://998b9f77d56b09a7f43564fb2cfd1a2f0c7667ead472a734e1619bb36d063e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://7b50cd3e07e1be2c4acfe5f7f9b2d7c2081cd707ac79700e58a4a69365be9061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://765188294d75bfe9dcdf6ee636af3821fc26b00005e03e3d4330b9e097824a02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32c5a93a9bc5e435a644aca26c468de6d30a428455aa8fc1c3f789916f7e1c1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c5a93a9bc5e435a644aca26c468de6d30a428455aa8fc1c3f789916f7e1c1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915360c6a5e156d4d42f2798ded12a113619420fc200c81f3fa3cefab71a47df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://915360c6a5e156d4d42f2798ded12a113619420fc200c81f3fa3cefab71a47df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc158
e087013235e67466bf746c8bea1ff5674609a9b16b01a90a2a5a39ed334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc158e087013235e67466bf746c8bea1ff5674609a9b16b01a90a2a5a39ed334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T15:49:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T15:49:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.550685 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.566329 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://634854c7fb77c1164c86444c7e911a07f9bc49e2f99413b6add0251467769e15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cde7d8b3497ed4448959c94131732b2517dbf867d70b54aab429907ea179e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:49:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.570978 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.571017 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.571029 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.571044 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.571056 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:18Z","lastTransitionTime":"2026-01-27T15:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.583188 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.599315 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b925fe363087eca25643c95b6dec334140c6111b18696ccb2b54b388a1c39d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.614548 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-cksm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b53edc9-0d4a-4d33-ba63-43a9dc551cef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf923f26e0d4bb7f618964465fba2ecd88d8589f7c701c67da81e7ec2cb06258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx27l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:49:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-cksm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.626633 4767 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d66w2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad34d879-c8b8-494a-81e7-69d72a3a48fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T15:50:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee49b17cb24ed73d0101868a4e365829ec3760dafb91f30c1eb1842a7af29d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T15:50:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjpsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T15:50:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d66w2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T15:51:18Z is after 2025-08-24T17:21:41Z" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.673745 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.673818 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.673836 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.673859 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.673874 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:18Z","lastTransitionTime":"2026-01-27T15:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.776338 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.776381 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.776394 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.776412 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.776426 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:18Z","lastTransitionTime":"2026-01-27T15:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.878761 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.878802 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.878813 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.878831 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.878843 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:18Z","lastTransitionTime":"2026-01-27T15:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.981242 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.981508 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.981625 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.981732 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:18 crc kubenswrapper[4767]: I0127 15:51:18.981812 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:18Z","lastTransitionTime":"2026-01-27T15:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.085027 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.085098 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.085111 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.085128 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.085139 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:19Z","lastTransitionTime":"2026-01-27T15:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.187896 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.187950 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.187962 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.187978 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.187988 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:19Z","lastTransitionTime":"2026-01-27T15:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.291242 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.291286 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.291300 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.291318 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.291330 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:19Z","lastTransitionTime":"2026-01-27T15:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.326372 4767 scope.go:117] "RemoveContainer" containerID="a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.326521 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.326569 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.326521 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:51:19 crc kubenswrapper[4767]: E0127 15:51:19.326586 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x97k7_openshift-ovn-kubernetes(96ceb606-f7e2-4d60-a632-a9443e01b99a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.326598 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:51:19 crc kubenswrapper[4767]: E0127 15:51:19.326677 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:51:19 crc kubenswrapper[4767]: E0127 15:51:19.326833 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:51:19 crc kubenswrapper[4767]: E0127 15:51:19.327062 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:51:19 crc kubenswrapper[4767]: E0127 15:51:19.327136 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.395373 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.395435 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.395450 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.395469 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.395486 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:19Z","lastTransitionTime":"2026-01-27T15:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.463834 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 17:25:56.859319576 +0000 UTC Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.498855 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.498962 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.498978 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.499005 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.499023 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:19Z","lastTransitionTime":"2026-01-27T15:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.601977 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.602023 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.602034 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.602050 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.602062 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:19Z","lastTransitionTime":"2026-01-27T15:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.704695 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.704735 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.704747 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.704770 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.704783 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:19Z","lastTransitionTime":"2026-01-27T15:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.807641 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.807683 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.807693 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.807708 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.807718 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:19Z","lastTransitionTime":"2026-01-27T15:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.910536 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.910587 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.910595 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.910610 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:19 crc kubenswrapper[4767]: I0127 15:51:19.910620 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:19Z","lastTransitionTime":"2026-01-27T15:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.012996 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.013027 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.013035 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.013049 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.013058 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:20Z","lastTransitionTime":"2026-01-27T15:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.115683 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.115730 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.115742 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.115761 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.115773 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:20Z","lastTransitionTime":"2026-01-27T15:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.219879 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.220228 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.220242 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.220259 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.220272 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:20Z","lastTransitionTime":"2026-01-27T15:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.322706 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.322739 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.322747 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.322763 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.322774 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:20Z","lastTransitionTime":"2026-01-27T15:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.425815 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.425867 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.425879 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.425900 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.425912 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:20Z","lastTransitionTime":"2026-01-27T15:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.464224 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 12:49:22.197494686 +0000 UTC Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.529813 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.529873 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.529887 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.529907 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.529920 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:20Z","lastTransitionTime":"2026-01-27T15:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.633077 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.633120 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.633129 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.633144 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.633154 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:20Z","lastTransitionTime":"2026-01-27T15:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.735834 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.735883 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.735894 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.735911 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.735925 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:20Z","lastTransitionTime":"2026-01-27T15:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.839135 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.839223 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.839264 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.839283 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.839307 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:20Z","lastTransitionTime":"2026-01-27T15:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.941882 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.941921 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.941932 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.941947 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:20 crc kubenswrapper[4767]: I0127 15:51:20.941958 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:20Z","lastTransitionTime":"2026-01-27T15:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.044729 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.044790 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.044800 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.044817 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.044832 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:21Z","lastTransitionTime":"2026-01-27T15:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.147013 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.147057 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.147068 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.147084 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.147096 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:21Z","lastTransitionTime":"2026-01-27T15:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.253121 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.253186 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.253216 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.253237 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.253256 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:21Z","lastTransitionTime":"2026-01-27T15:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.325116 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.325343 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:51:21 crc kubenswrapper[4767]: E0127 15:51:21.325435 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.325448 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.325508 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:51:21 crc kubenswrapper[4767]: E0127 15:51:21.325721 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:51:21 crc kubenswrapper[4767]: E0127 15:51:21.325845 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:51:21 crc kubenswrapper[4767]: E0127 15:51:21.325962 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.356501 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.356545 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.356554 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.356569 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.356581 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:21Z","lastTransitionTime":"2026-01-27T15:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.459652 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.459698 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.459710 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.459726 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.459738 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:21Z","lastTransitionTime":"2026-01-27T15:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.465167 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 17:24:54.796324946 +0000 UTC Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.561670 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.561725 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.561740 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.561761 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.561773 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:21Z","lastTransitionTime":"2026-01-27T15:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.665514 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.665545 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.665556 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.665589 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.665598 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:21Z","lastTransitionTime":"2026-01-27T15:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.767909 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.767945 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.767955 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.767968 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.767978 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:21Z","lastTransitionTime":"2026-01-27T15:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.870853 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.870889 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.870904 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.870924 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.870938 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:21Z","lastTransitionTime":"2026-01-27T15:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.973067 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.973093 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.973101 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.973115 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:21 crc kubenswrapper[4767]: I0127 15:51:21.973124 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:21Z","lastTransitionTime":"2026-01-27T15:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.076973 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.077013 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.077027 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.077048 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.077060 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:22Z","lastTransitionTime":"2026-01-27T15:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.179976 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.180017 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.180028 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.180044 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.180055 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:22Z","lastTransitionTime":"2026-01-27T15:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.283321 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.283364 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.283374 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.283390 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.283400 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:22Z","lastTransitionTime":"2026-01-27T15:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.385843 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.385889 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.385900 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.385917 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.385936 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:22Z","lastTransitionTime":"2026-01-27T15:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.465408 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 08:03:29.729950219 +0000 UTC Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.488436 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.488486 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.488499 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.488517 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.488530 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:22Z","lastTransitionTime":"2026-01-27T15:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.590918 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.590966 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.590978 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.590995 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.591008 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:22Z","lastTransitionTime":"2026-01-27T15:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.693163 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.693246 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.693261 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.693279 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.693291 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:22Z","lastTransitionTime":"2026-01-27T15:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.795828 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.795899 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.795927 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.795961 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.795980 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:22Z","lastTransitionTime":"2026-01-27T15:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.898808 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.898864 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.898878 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.898897 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:22 crc kubenswrapper[4767]: I0127 15:51:22.898909 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:22Z","lastTransitionTime":"2026-01-27T15:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.001281 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.001328 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.001340 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.001357 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.001369 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:23Z","lastTransitionTime":"2026-01-27T15:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.104186 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.104254 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.104266 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.104283 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.104295 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:23Z","lastTransitionTime":"2026-01-27T15:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.206612 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.206679 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.206692 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.206711 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.206723 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:23Z","lastTransitionTime":"2026-01-27T15:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.310144 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.310187 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.310210 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.310229 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.310239 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:23Z","lastTransitionTime":"2026-01-27T15:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.325000 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.325068 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.325084 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.325116 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:51:23 crc kubenswrapper[4767]: E0127 15:51:23.325229 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:51:23 crc kubenswrapper[4767]: E0127 15:51:23.325482 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:51:23 crc kubenswrapper[4767]: E0127 15:51:23.325517 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:51:23 crc kubenswrapper[4767]: E0127 15:51:23.325592 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.413305 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.413344 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.413357 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.413374 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.413387 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:23Z","lastTransitionTime":"2026-01-27T15:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.466100 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 14:04:57.374901489 +0000 UTC Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.517296 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.517587 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.517731 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.517858 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.517935 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:23Z","lastTransitionTime":"2026-01-27T15:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.620464 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.620498 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.620507 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.620522 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.620533 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:23Z","lastTransitionTime":"2026-01-27T15:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.722859 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.723266 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.723372 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.723483 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.723564 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:23Z","lastTransitionTime":"2026-01-27T15:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.825557 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.825597 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.825606 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.825619 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.825630 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:23Z","lastTransitionTime":"2026-01-27T15:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.928421 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.928521 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.928543 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.928572 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:23 crc kubenswrapper[4767]: I0127 15:51:23.928595 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:23Z","lastTransitionTime":"2026-01-27T15:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.030788 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.031124 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.031229 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.031333 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.031405 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:24Z","lastTransitionTime":"2026-01-27T15:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.135440 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.135497 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.135510 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.135539 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.135552 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:24Z","lastTransitionTime":"2026-01-27T15:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.238289 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.238371 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.238384 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.238421 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.238432 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:24Z","lastTransitionTime":"2026-01-27T15:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.342193 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.342350 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.342362 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.342379 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.342390 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:24Z","lastTransitionTime":"2026-01-27T15:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.444678 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.444718 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.444729 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.444751 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.444764 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:24Z","lastTransitionTime":"2026-01-27T15:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.467076 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 06:13:30.79317248 +0000 UTC Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.546736 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.546806 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.546821 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.546837 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.546870 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:24Z","lastTransitionTime":"2026-01-27T15:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.650589 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.650649 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.650661 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.650682 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.650696 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:24Z","lastTransitionTime":"2026-01-27T15:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.752790 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.752849 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.752867 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.752892 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.752910 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:24Z","lastTransitionTime":"2026-01-27T15:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.855605 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.855661 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.855672 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.855689 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.856096 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:24Z","lastTransitionTime":"2026-01-27T15:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.957984 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.958015 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.958040 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.958055 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:24 crc kubenswrapper[4767]: I0127 15:51:24.958064 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:24Z","lastTransitionTime":"2026-01-27T15:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.060632 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.060661 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.060669 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.060681 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.060691 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:25Z","lastTransitionTime":"2026-01-27T15:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.163303 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.163347 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.163358 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.163381 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.163399 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:25Z","lastTransitionTime":"2026-01-27T15:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.266258 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.266301 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.266311 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.266376 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.266390 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:25Z","lastTransitionTime":"2026-01-27T15:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.324804 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.324829 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.324850 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.324818 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:51:25 crc kubenswrapper[4767]: E0127 15:51:25.324941 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:51:25 crc kubenswrapper[4767]: E0127 15:51:25.325024 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:51:25 crc kubenswrapper[4767]: E0127 15:51:25.325141 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:51:25 crc kubenswrapper[4767]: E0127 15:51:25.325267 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.368721 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.368772 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.368784 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.368801 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.368816 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:25Z","lastTransitionTime":"2026-01-27T15:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.467602 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 19:03:15.747124053 +0000 UTC Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.471686 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.471750 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.471778 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.471798 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.471811 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:25Z","lastTransitionTime":"2026-01-27T15:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.573804 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.573845 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.573857 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.573872 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.573884 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:25Z","lastTransitionTime":"2026-01-27T15:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.677436 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.677495 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.677508 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.677527 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.677895 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:25Z","lastTransitionTime":"2026-01-27T15:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.780389 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.780440 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.780449 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.780468 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.780478 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:25Z","lastTransitionTime":"2026-01-27T15:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.883148 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.883195 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.883230 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.883253 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.883266 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:25Z","lastTransitionTime":"2026-01-27T15:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.986081 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.986139 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.986150 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.986166 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:25 crc kubenswrapper[4767]: I0127 15:51:25.986179 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:25Z","lastTransitionTime":"2026-01-27T15:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.089435 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.089498 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.089512 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.089531 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.089546 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:26Z","lastTransitionTime":"2026-01-27T15:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.191830 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.191875 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.191885 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.191902 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.191914 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:26Z","lastTransitionTime":"2026-01-27T15:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.294917 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.294958 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.294966 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.294981 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.294991 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:26Z","lastTransitionTime":"2026-01-27T15:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.397094 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.397141 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.397153 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.397170 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.397184 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:26Z","lastTransitionTime":"2026-01-27T15:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.467966 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 18:13:02.499354377 +0000 UTC Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.499958 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.500058 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.500075 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.500092 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.500108 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:26Z","lastTransitionTime":"2026-01-27T15:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.602356 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.602398 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.602408 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.602424 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.602435 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:26Z","lastTransitionTime":"2026-01-27T15:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.704481 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.704542 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.704551 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.704566 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.704577 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:26Z","lastTransitionTime":"2026-01-27T15:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.807130 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.807193 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.807227 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.807243 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.807255 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:26Z","lastTransitionTime":"2026-01-27T15:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.909652 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.909704 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.909718 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.909737 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:26 crc kubenswrapper[4767]: I0127 15:51:26.909753 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:26Z","lastTransitionTime":"2026-01-27T15:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.012526 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.012573 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.012588 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.012609 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.012626 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:27Z","lastTransitionTime":"2026-01-27T15:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.114973 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.115109 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.115129 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.115145 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.115158 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:27Z","lastTransitionTime":"2026-01-27T15:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.217298 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.217337 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.217347 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.217362 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.217371 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:27Z","lastTransitionTime":"2026-01-27T15:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.297362 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.297406 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.297419 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.297436 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.297447 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:27Z","lastTransitionTime":"2026-01-27T15:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.325076 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:51:27 crc kubenswrapper[4767]: E0127 15:51:27.325269 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.325076 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:51:27 crc kubenswrapper[4767]: E0127 15:51:27.325356 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.325077 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.325103 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:51:27 crc kubenswrapper[4767]: E0127 15:51:27.325533 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:51:27 crc kubenswrapper[4767]: E0127 15:51:27.325419 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.349592 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.349624 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.349632 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.349646 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.349655 4767 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T15:51:27Z","lastTransitionTime":"2026-01-27T15:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.374328 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-47hrg"] Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.374736 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47hrg" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.377321 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.377450 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.377582 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.377762 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.439125 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-xgf2q" podStartSLOduration=89.439107251 podStartE2EDuration="1m29.439107251s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:51:27.439098071 +0000 UTC m=+109.828115594" watchObservedRunningTime="2026-01-27 15:51:27.439107251 +0000 UTC m=+109.828124774" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.454564 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31a6d4ef-678c-4952-ba33-1a28f4c739e3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-47hrg\" (UID: \"31a6d4ef-678c-4952-ba33-1a28f4c739e3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47hrg" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.454616 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/31a6d4ef-678c-4952-ba33-1a28f4c739e3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-47hrg\" (UID: \"31a6d4ef-678c-4952-ba33-1a28f4c739e3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47hrg" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.454639 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31a6d4ef-678c-4952-ba33-1a28f4c739e3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-47hrg\" (UID: \"31a6d4ef-678c-4952-ba33-1a28f4c739e3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47hrg" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.454709 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31a6d4ef-678c-4952-ba33-1a28f4c739e3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-47hrg\" (UID: \"31a6d4ef-678c-4952-ba33-1a28f4c739e3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47hrg" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.454743 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/31a6d4ef-678c-4952-ba33-1a28f4c739e3-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-5c965bbfc6-47hrg\" (UID: \"31a6d4ef-678c-4952-ba33-1a28f4c739e3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47hrg" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.455240 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-zfxc7" podStartSLOduration=89.455196141 podStartE2EDuration="1m29.455196141s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:51:27.455053197 +0000 UTC m=+109.844070720" watchObservedRunningTime="2026-01-27 15:51:27.455196141 +0000 UTC m=+109.844213674" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.466112 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7hl64" podStartSLOduration=88.466092299 podStartE2EDuration="1m28.466092299s" podCreationTimestamp="2026-01-27 15:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:51:27.465864773 +0000 UTC m=+109.854882306" watchObservedRunningTime="2026-01-27 15:51:27.466092299 +0000 UTC m=+109.855109822" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.468829 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 19:35:00.845402873 +0000 UTC Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.468929 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.477354 4767 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.504782 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=21.504759778 podStartE2EDuration="21.504759778s" podCreationTimestamp="2026-01-27 15:51:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:51:27.479619674 +0000 UTC m=+109.868637187" watchObservedRunningTime="2026-01-27 15:51:27.504759778 +0000 UTC m=+109.893777301" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.505102 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=21.505096628 podStartE2EDuration="21.505096628s" podCreationTimestamp="2026-01-27 15:51:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:51:27.50379891 +0000 UTC m=+109.892816443" watchObservedRunningTime="2026-01-27 15:51:27.505096628 +0000 UTC m=+109.894114151" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.555951 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31a6d4ef-678c-4952-ba33-1a28f4c739e3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-47hrg\" (UID: \"31a6d4ef-678c-4952-ba33-1a28f4c739e3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47hrg" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.556010 4767 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/31a6d4ef-678c-4952-ba33-1a28f4c739e3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-47hrg\" (UID: \"31a6d4ef-678c-4952-ba33-1a28f4c739e3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47hrg" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.556034 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31a6d4ef-678c-4952-ba33-1a28f4c739e3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-47hrg\" (UID: \"31a6d4ef-678c-4952-ba33-1a28f4c739e3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47hrg" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.556068 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31a6d4ef-678c-4952-ba33-1a28f4c739e3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-47hrg\" (UID: \"31a6d4ef-678c-4952-ba33-1a28f4c739e3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47hrg" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.556098 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/31a6d4ef-678c-4952-ba33-1a28f4c739e3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-47hrg\" (UID: \"31a6d4ef-678c-4952-ba33-1a28f4c739e3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47hrg" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.556105 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/31a6d4ef-678c-4952-ba33-1a28f4c739e3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-47hrg\" (UID: \"31a6d4ef-678c-4952-ba33-1a28f4c739e3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47hrg" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.556136 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/31a6d4ef-678c-4952-ba33-1a28f4c739e3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-47hrg\" (UID: \"31a6d4ef-678c-4952-ba33-1a28f4c739e3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47hrg" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.557042 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31a6d4ef-678c-4952-ba33-1a28f4c739e3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-47hrg\" (UID: \"31a6d4ef-678c-4952-ba33-1a28f4c739e3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47hrg" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.559487 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-cksm8" podStartSLOduration=89.559475196 podStartE2EDuration="1m29.559475196s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:51:27.559074945 +0000 UTC m=+109.948092468" watchObservedRunningTime="2026-01-27 15:51:27.559475196 +0000 UTC m=+109.948492719" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 
15:51:27.568037 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31a6d4ef-678c-4952-ba33-1a28f4c739e3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-47hrg\" (UID: \"31a6d4ef-678c-4952-ba33-1a28f4c739e3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47hrg" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.572102 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-d66w2" podStartSLOduration=89.572081964 podStartE2EDuration="1m29.572081964s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:51:27.57157143 +0000 UTC m=+109.960588963" watchObservedRunningTime="2026-01-27 15:51:27.572081964 +0000 UTC m=+109.961099487" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.587665 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31a6d4ef-678c-4952-ba33-1a28f4c739e3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-47hrg\" (UID: \"31a6d4ef-678c-4952-ba33-1a28f4c739e3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47hrg" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.596556 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=88.596535219 podStartE2EDuration="1m28.596535219s" podCreationTimestamp="2026-01-27 15:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:51:27.596098546 +0000 UTC m=+109.985116079" watchObservedRunningTime="2026-01-27 15:51:27.596535219 +0000 UTC m=+109.985552742" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.618688 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podStartSLOduration=89.618665915 podStartE2EDuration="1m29.618665915s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:51:27.607376095 +0000 UTC m=+109.996393628" watchObservedRunningTime="2026-01-27 15:51:27.618665915 +0000 UTC m=+110.007683438" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.654392 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=58.654367567 podStartE2EDuration="58.654367567s" podCreationTimestamp="2026-01-27 15:50:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:51:27.653593675 +0000 UTC m=+110.042611248" watchObservedRunningTime="2026-01-27 15:51:27.654367567 +0000 UTC m=+110.043385100" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.676181 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.676160734 podStartE2EDuration="1m29.676160734s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-27 15:51:27.674265248 +0000 UTC m=+110.063282791" watchObservedRunningTime="2026-01-27 15:51:27.676160734 +0000 UTC m=+110.065178277" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.690701 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47hrg" Jan 27 15:51:27 crc kubenswrapper[4767]: I0127 15:51:27.924316 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47hrg" event={"ID":"31a6d4ef-678c-4952-ba33-1a28f4c739e3","Type":"ContainerStarted","Data":"585a5a40343c9a082da3980d9b5ea0385c7850c4aa0880dafac3df85138f7dc3"} Jan 27 15:51:28 crc kubenswrapper[4767]: I0127 15:51:28.930307 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47hrg" event={"ID":"31a6d4ef-678c-4952-ba33-1a28f4c739e3","Type":"ContainerStarted","Data":"efcd97cff58a860c4b9eea04e62899ad113b853dba685ebfb23c6686a8842fb7"} Jan 27 15:51:28 crc kubenswrapper[4767]: I0127 15:51:28.944395 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-47hrg" podStartSLOduration=90.944375779 podStartE2EDuration="1m30.944375779s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:51:28.943845754 +0000 UTC m=+111.332863327" watchObservedRunningTime="2026-01-27 15:51:28.944375779 +0000 UTC m=+111.333393302" Jan 27 15:51:29 crc kubenswrapper[4767]: I0127 15:51:29.324554 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:51:29 crc kubenswrapper[4767]: E0127 15:51:29.325024 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:51:29 crc kubenswrapper[4767]: I0127 15:51:29.324670 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:51:29 crc kubenswrapper[4767]: E0127 15:51:29.325259 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:51:29 crc kubenswrapper[4767]: I0127 15:51:29.324750 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:51:29 crc kubenswrapper[4767]: I0127 15:51:29.324715 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:51:29 crc kubenswrapper[4767]: E0127 15:51:29.325387 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:51:29 crc kubenswrapper[4767]: E0127 15:51:29.325497 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:51:30 crc kubenswrapper[4767]: I0127 15:51:30.326360 4767 scope.go:117] "RemoveContainer" containerID="a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb" Jan 27 15:51:30 crc kubenswrapper[4767]: E0127 15:51:30.326567 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x97k7_openshift-ovn-kubernetes(96ceb606-f7e2-4d60-a632-a9443e01b99a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" Jan 27 15:51:31 crc kubenswrapper[4767]: I0127 15:51:31.325529 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:51:31 crc kubenswrapper[4767]: I0127 15:51:31.325608 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:51:31 crc kubenswrapper[4767]: I0127 15:51:31.325564 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:51:31 crc kubenswrapper[4767]: I0127 15:51:31.325541 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:51:31 crc kubenswrapper[4767]: E0127 15:51:31.325733 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:51:31 crc kubenswrapper[4767]: E0127 15:51:31.325860 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:51:31 crc kubenswrapper[4767]: E0127 15:51:31.325964 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:51:31 crc kubenswrapper[4767]: E0127 15:51:31.326056 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:51:33 crc kubenswrapper[4767]: I0127 15:51:33.324816 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:51:33 crc kubenswrapper[4767]: I0127 15:51:33.324906 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:51:33 crc kubenswrapper[4767]: I0127 15:51:33.324975 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:51:33 crc kubenswrapper[4767]: I0127 15:51:33.324967 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:51:33 crc kubenswrapper[4767]: E0127 15:51:33.325119 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:51:33 crc kubenswrapper[4767]: E0127 15:51:33.325225 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:51:33 crc kubenswrapper[4767]: E0127 15:51:33.325365 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:51:33 crc kubenswrapper[4767]: E0127 15:51:33.325481 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:51:34 crc kubenswrapper[4767]: I0127 15:51:34.948545 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zfxc7_cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78/kube-multus/1.log" Jan 27 15:51:34 crc kubenswrapper[4767]: I0127 15:51:34.949621 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zfxc7_cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78/kube-multus/0.log" Jan 27 15:51:34 crc kubenswrapper[4767]: I0127 15:51:34.949658 4767 generic.go:334] "Generic (PLEG): container finished" podID="cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78" containerID="3817cfa4a4454eef8d58130f57a16e1665d28b56c080f84edcc8f341a5e5267f" exitCode=1 Jan 27 15:51:34 crc kubenswrapper[4767]: I0127 15:51:34.949689 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zfxc7" event={"ID":"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78","Type":"ContainerDied","Data":"3817cfa4a4454eef8d58130f57a16e1665d28b56c080f84edcc8f341a5e5267f"} Jan 27 15:51:34 crc kubenswrapper[4767]: I0127 15:51:34.949725 4767 scope.go:117] "RemoveContainer" containerID="3cdd24f08176a4b13077505ab204a50ed8b7115b7f864c433edff0ea363f3b5d" Jan 27 15:51:34 crc kubenswrapper[4767]: I0127 15:51:34.950101 4767 scope.go:117] "RemoveContainer" containerID="3817cfa4a4454eef8d58130f57a16e1665d28b56c080f84edcc8f341a5e5267f" Jan 27 15:51:34 crc kubenswrapper[4767]: E0127 15:51:34.950296 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-zfxc7_openshift-multus(cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78)\"" pod="openshift-multus/multus-zfxc7" podUID="cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78" Jan 27 15:51:35 crc kubenswrapper[4767]: I0127 15:51:35.325385 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:51:35 crc kubenswrapper[4767]: I0127 15:51:35.325448 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:51:35 crc kubenswrapper[4767]: I0127 15:51:35.325483 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:51:35 crc kubenswrapper[4767]: I0127 15:51:35.325543 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:51:35 crc kubenswrapper[4767]: E0127 15:51:35.325623 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:51:35 crc kubenswrapper[4767]: E0127 15:51:35.325783 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:51:35 crc kubenswrapper[4767]: E0127 15:51:35.325852 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:51:35 crc kubenswrapper[4767]: E0127 15:51:35.325989 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:51:35 crc kubenswrapper[4767]: I0127 15:51:35.954255 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zfxc7_cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78/kube-multus/1.log" Jan 27 15:51:37 crc kubenswrapper[4767]: I0127 15:51:37.324733 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:51:37 crc kubenswrapper[4767]: E0127 15:51:37.325784 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:51:37 crc kubenswrapper[4767]: I0127 15:51:37.324880 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:51:37 crc kubenswrapper[4767]: E0127 15:51:37.326055 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:51:37 crc kubenswrapper[4767]: I0127 15:51:37.324827 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:51:37 crc kubenswrapper[4767]: I0127 15:51:37.324909 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:51:37 crc kubenswrapper[4767]: E0127 15:51:37.326360 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:51:37 crc kubenswrapper[4767]: E0127 15:51:37.326476 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:51:38 crc kubenswrapper[4767]: E0127 15:51:38.287400 4767 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 27 15:51:38 crc kubenswrapper[4767]: E0127 15:51:38.503725 4767 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 15:51:39 crc kubenswrapper[4767]: I0127 15:51:39.325022 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:51:39 crc kubenswrapper[4767]: I0127 15:51:39.325250 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:51:39 crc kubenswrapper[4767]: E0127 15:51:39.325326 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:51:39 crc kubenswrapper[4767]: I0127 15:51:39.325043 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:51:39 crc kubenswrapper[4767]: I0127 15:51:39.325071 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:51:39 crc kubenswrapper[4767]: E0127 15:51:39.325460 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:51:39 crc kubenswrapper[4767]: E0127 15:51:39.325609 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:51:39 crc kubenswrapper[4767]: E0127 15:51:39.325777 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:51:41 crc kubenswrapper[4767]: I0127 15:51:41.325497 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:51:41 crc kubenswrapper[4767]: I0127 15:51:41.325497 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:51:41 crc kubenswrapper[4767]: I0127 15:51:41.325526 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:51:41 crc kubenswrapper[4767]: E0127 15:51:41.326560 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:51:41 crc kubenswrapper[4767]: I0127 15:51:41.325586 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:51:41 crc kubenswrapper[4767]: E0127 15:51:41.326602 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:51:41 crc kubenswrapper[4767]: E0127 15:51:41.326350 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:51:41 crc kubenswrapper[4767]: E0127 15:51:41.326726 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:51:42 crc kubenswrapper[4767]: I0127 15:51:42.326012 4767 scope.go:117] "RemoveContainer" containerID="a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb" Jan 27 15:51:42 crc kubenswrapper[4767]: I0127 15:51:42.989863 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x97k7_96ceb606-f7e2-4d60-a632-a9443e01b99a/ovnkube-controller/3.log" Jan 27 15:51:42 crc kubenswrapper[4767]: I0127 15:51:42.992404 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" event={"ID":"96ceb606-f7e2-4d60-a632-a9443e01b99a","Type":"ContainerStarted","Data":"f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d"} Jan 27 15:51:42 crc kubenswrapper[4767]: I0127 15:51:42.992910 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:51:43 crc kubenswrapper[4767]: I0127 15:51:43.017895 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" podStartSLOduration=105.017878113 podStartE2EDuration="1m45.017878113s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:51:43.017297146 +0000 UTC m=+125.406314669" watchObservedRunningTime="2026-01-27 15:51:43.017878113 +0000 UTC m=+125.406895636" Jan 27 15:51:43 crc kubenswrapper[4767]: I0127 15:51:43.177506 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-r296r"] Jan 27 15:51:43 crc kubenswrapper[4767]: I0127 15:51:43.177641 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:51:43 crc kubenswrapper[4767]: E0127 15:51:43.177752 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:51:43 crc kubenswrapper[4767]: I0127 15:51:43.324616 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:51:43 crc kubenswrapper[4767]: I0127 15:51:43.324616 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:51:43 crc kubenswrapper[4767]: E0127 15:51:43.324741 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:51:43 crc kubenswrapper[4767]: E0127 15:51:43.324800 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:51:43 crc kubenswrapper[4767]: I0127 15:51:43.324636 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:51:43 crc kubenswrapper[4767]: E0127 15:51:43.324860 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:51:43 crc kubenswrapper[4767]: E0127 15:51:43.504954 4767 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 15:51:45 crc kubenswrapper[4767]: I0127 15:51:45.325087 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:51:45 crc kubenswrapper[4767]: I0127 15:51:45.325174 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:51:45 crc kubenswrapper[4767]: I0127 15:51:45.325225 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:51:45 crc kubenswrapper[4767]: E0127 15:51:45.325251 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:51:45 crc kubenswrapper[4767]: I0127 15:51:45.325362 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:51:45 crc kubenswrapper[4767]: E0127 15:51:45.325408 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:51:45 crc kubenswrapper[4767]: E0127 15:51:45.325550 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:51:45 crc kubenswrapper[4767]: E0127 15:51:45.325678 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:51:48 crc kubenswrapper[4767]: I0127 15:51:48.454574 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:51:48 crc kubenswrapper[4767]: I0127 15:51:48.454607 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:51:48 crc kubenswrapper[4767]: I0127 15:51:48.454641 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:51:48 crc kubenswrapper[4767]: E0127 15:51:48.455170 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:51:48 crc kubenswrapper[4767]: I0127 15:51:48.454673 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:51:48 crc kubenswrapper[4767]: I0127 15:51:48.455361 4767 scope.go:117] "RemoveContainer" containerID="3817cfa4a4454eef8d58130f57a16e1665d28b56c080f84edcc8f341a5e5267f" Jan 27 15:51:48 crc kubenswrapper[4767]: E0127 15:51:48.455452 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:51:48 crc kubenswrapper[4767]: E0127 15:51:48.455602 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:51:48 crc kubenswrapper[4767]: E0127 15:51:48.455676 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:51:48 crc kubenswrapper[4767]: E0127 15:51:48.506891 4767 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 15:51:49 crc kubenswrapper[4767]: I0127 15:51:49.012625 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zfxc7_cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78/kube-multus/1.log" Jan 27 15:51:49 crc kubenswrapper[4767]: I0127 15:51:49.012681 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zfxc7" event={"ID":"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78","Type":"ContainerStarted","Data":"e7b2f4a8fda18721846ff4de34a827a6a4b72c348d58accb69f75befc4f647c5"} Jan 27 15:51:50 crc kubenswrapper[4767]: I0127 15:51:50.324573 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:51:50 crc kubenswrapper[4767]: I0127 15:51:50.324573 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:51:50 crc kubenswrapper[4767]: E0127 15:51:50.325700 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:51:50 crc kubenswrapper[4767]: I0127 15:51:50.324613 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:51:50 crc kubenswrapper[4767]: E0127 15:51:50.326274 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:51:50 crc kubenswrapper[4767]: I0127 15:51:50.324614 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:51:50 crc kubenswrapper[4767]: E0127 15:51:50.326893 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:51:50 crc kubenswrapper[4767]: E0127 15:51:50.327300 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:51:52 crc kubenswrapper[4767]: I0127 15:51:52.325474 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:51:52 crc kubenswrapper[4767]: I0127 15:51:52.325573 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:51:52 crc kubenswrapper[4767]: I0127 15:51:52.325474 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:51:52 crc kubenswrapper[4767]: E0127 15:51:52.325646 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 15:51:52 crc kubenswrapper[4767]: E0127 15:51:52.325751 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 15:51:52 crc kubenswrapper[4767]: I0127 15:51:52.325812 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:51:52 crc kubenswrapper[4767]: E0127 15:51:52.325868 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 15:51:52 crc kubenswrapper[4767]: E0127 15:51:52.325941 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r296r" podUID="03660290-055d-4f50-be45-3d6d9c023b34" Jan 27 15:51:54 crc kubenswrapper[4767]: I0127 15:51:54.324706 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:51:54 crc kubenswrapper[4767]: I0127 15:51:54.324873 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r" Jan 27 15:51:54 crc kubenswrapper[4767]: I0127 15:51:54.324938 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:51:54 crc kubenswrapper[4767]: I0127 15:51:54.325693 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:51:54 crc kubenswrapper[4767]: I0127 15:51:54.327013 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 15:51:54 crc kubenswrapper[4767]: I0127 15:51:54.327464 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 15:51:54 crc kubenswrapper[4767]: I0127 15:51:54.328363 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 15:51:54 crc kubenswrapper[4767]: I0127 15:51:54.328486 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 15:51:54 crc kubenswrapper[4767]: I0127 15:51:54.328660 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 15:51:54 crc kubenswrapper[4767]: I0127 15:51:54.328776 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.693982 4767 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.747260 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-f9mz7"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.756503 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4vdpc"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.756663 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f9mz7" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.757422 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-69lb2"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.757742 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7m254"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.757953 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d7nhv"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.758189 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.758240 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4vdpc" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.758222 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-69lb2" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.762341 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.762568 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.764402 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.765996 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4blj6"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.772092 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.772308 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-ksqxd"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.772551 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4jsr"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.772760 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.772969 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sq4g8"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.773186 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-l4l7n"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.773320 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.773498 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.773557 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l4l7n" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.773579 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.773774 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.773801 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.773826 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4jsr" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.773803 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-ksqxd" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.773869 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.773937 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.773994 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.774001 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.774039 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.774110 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.774146 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.774152 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sq4g8" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.774229 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.774268 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.774379 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.774235 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.773780 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-4blj6" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.773505 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-64xhv"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.774713 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.774739 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.775016 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mtbc"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.775060 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-64xhv" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.775177 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.775360 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-vxkdk"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.775643 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.775654 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d96md"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.775725 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mtbc" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.775945 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bk226"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.776301 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-4n6ch"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.776363 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-vxkdk" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.776473 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-bk226" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.776370 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d96md" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.776672 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-4n6ch" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.784970 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.786486 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.786664 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.786693 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-f4kgp"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.786880 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.786970 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.786978 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.787044 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.787106 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.787297 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.788609 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.788818 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.789007 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.789176 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.789436 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.789808 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.790512 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.790645 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.790775 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.790874 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.790896 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.791005 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.792181 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.792443 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.793251 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.793491 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.793779 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.793937 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.794077 4767 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.794275 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.794440 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.794619 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.794794 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.796215 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.796512 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.796654 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.796822 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.796979 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.797141 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.797304 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.797495 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.797660 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.797856 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.798006 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.801421 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cd7g2"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.801900 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cd7g2" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.802266 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-hgzzw"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.808936 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.808957 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.809403 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.810073 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.810137 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.810237 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.810323 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.810378 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.810460 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.810593 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.810659 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.817320 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.819572 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-fctcl"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.820228 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-fctcl" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.820468 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hgzzw" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.821814 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.821987 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.827273 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.827535 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.827586 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.827690 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.827881 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.828041 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.828090 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.828270 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.828944 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.810600 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.810634 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.847834 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.847868 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.848014 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.848056 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.848217 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.848306 4767 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.849364 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.849531 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.851360 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.851704 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6l2vm"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.852274 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6l2vm" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.853799 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.858486 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.858757 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.859413 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.861159 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.862821 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qvr6"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.863754 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-czw9w"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.864084 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cbltv"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.864519 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cbltv" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.864805 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qvr6" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.865302 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-czw9w" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.867327 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vnr5s"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.871109 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6mgl"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.871596 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vnr5s" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.871694 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tqzlw"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.871982 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6mgl" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.872002 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-gfdql"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.872085 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.873042 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-gfdql" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.875218 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lcrxj"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.875713 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mbflz"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.876151 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7gdqg"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.876530 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7gdqg" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.876568 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lcrxj" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.876782 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbflz" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.878156 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.879153 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.879800 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.885920 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v2v2x"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.893542 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9tjmr"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.893962 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492145-4vjsw"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.894102 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9tjmr" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.894106 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v2v2x" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.894710 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqjkg"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.895546 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-4vjsw" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.897889 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7m254"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.897919 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cml7v"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.898165 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqjkg" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.899103 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.905349 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d7nhv"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.905891 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cml7v" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.907349 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4blj6"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.907377 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4vdpc"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.907393 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-vxtlv"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.909590 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-px962"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.909843 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vxtlv" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.912777 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-ksqxd"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.912805 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mtbc"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.912827 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-hgzzw"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.912838 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-l4l7n"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.912848 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bk226"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.912859 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.912869 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4jsr"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.912879 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-vxkdk"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.912895 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-fctcl"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.912977 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-px962" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.915341 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6l2vm"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.917053 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7gdqg"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.919811 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.922296 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-69lb2"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.925083 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.927651 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vnr5s"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.928879 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-64xhv"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.931002 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-gfdql"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.931027 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6mgl"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.932903 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sq4g8"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.932929 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-czw9w"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.934703 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tqzlw"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.935539 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-f4kgp"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.936951 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d96md"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.937811 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cbltv"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.938952 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.941680 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qvr6"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.944421 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v2v2x"] 
Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.944537 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lcrxj"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.946615 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cd7g2"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.947733 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-px962"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.948859 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5pb8t"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.950752 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5pb8t" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.951249 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mbflz"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.952945 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492145-4vjsw"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.953945 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqjkg"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.955255 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/244d70a9-5aaf-495d-82bc-fcfaa9a5a984-etcd-client\") pod \"etcd-operator-b45778765-bk226\" (UID: \"244d70a9-5aaf-495d-82bc-fcfaa9a5a984\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bk226" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.955328 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/244d70a9-5aaf-495d-82bc-fcfaa9a5a984-serving-cert\") pod \"etcd-operator-b45778765-bk226\" (UID: \"244d70a9-5aaf-495d-82bc-fcfaa9a5a984\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bk226" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.955401 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/244d70a9-5aaf-495d-82bc-fcfaa9a5a984-etcd-ca\") pod \"etcd-operator-b45778765-bk226\" (UID: \"244d70a9-5aaf-495d-82bc-fcfaa9a5a984\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bk226" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.955515 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/244d70a9-5aaf-495d-82bc-fcfaa9a5a984-etcd-service-ca\") pod \"etcd-operator-b45778765-bk226\" (UID: \"244d70a9-5aaf-495d-82bc-fcfaa9a5a984\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bk226" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.955597 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/244d70a9-5aaf-495d-82bc-fcfaa9a5a984-config\") pod \"etcd-operator-b45778765-bk226\" (UID: \"244d70a9-5aaf-495d-82bc-fcfaa9a5a984\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-bk226" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.955668 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b96fh\" (UniqueName: \"kubernetes.io/projected/244d70a9-5aaf-495d-82bc-fcfaa9a5a984-kube-api-access-b96fh\") pod \"etcd-operator-b45778765-bk226\" (UID: \"244d70a9-5aaf-495d-82bc-fcfaa9a5a984\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bk226" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.958413 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vxtlv"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.960452 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.961150 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9tjmr"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.962347 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cml7v"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.963277 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5pb8t"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.963964 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-8tbrf"] Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.964589 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8tbrf" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.978947 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 15:51:58 crc kubenswrapper[4767]: I0127 15:51:58.998942 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.026843 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.038586 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.056221 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/244d70a9-5aaf-495d-82bc-fcfaa9a5a984-etcd-ca\") pod \"etcd-operator-b45778765-bk226\" (UID: \"244d70a9-5aaf-495d-82bc-fcfaa9a5a984\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bk226" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.056269 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/244d70a9-5aaf-495d-82bc-fcfaa9a5a984-etcd-service-ca\") pod \"etcd-operator-b45778765-bk226\" (UID: \"244d70a9-5aaf-495d-82bc-fcfaa9a5a984\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bk226" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.056306 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/244d70a9-5aaf-495d-82bc-fcfaa9a5a984-config\") pod 
\"etcd-operator-b45778765-bk226\" (UID: \"244d70a9-5aaf-495d-82bc-fcfaa9a5a984\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bk226" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.056323 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b96fh\" (UniqueName: \"kubernetes.io/projected/244d70a9-5aaf-495d-82bc-fcfaa9a5a984-kube-api-access-b96fh\") pod \"etcd-operator-b45778765-bk226\" (UID: \"244d70a9-5aaf-495d-82bc-fcfaa9a5a984\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bk226" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.056342 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/244d70a9-5aaf-495d-82bc-fcfaa9a5a984-etcd-client\") pod \"etcd-operator-b45778765-bk226\" (UID: \"244d70a9-5aaf-495d-82bc-fcfaa9a5a984\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bk226" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.056358 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/244d70a9-5aaf-495d-82bc-fcfaa9a5a984-serving-cert\") pod \"etcd-operator-b45778765-bk226\" (UID: \"244d70a9-5aaf-495d-82bc-fcfaa9a5a984\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bk226" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.056945 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/244d70a9-5aaf-495d-82bc-fcfaa9a5a984-etcd-ca\") pod \"etcd-operator-b45778765-bk226\" (UID: \"244d70a9-5aaf-495d-82bc-fcfaa9a5a984\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bk226" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.057395 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/244d70a9-5aaf-495d-82bc-fcfaa9a5a984-config\") pod \"etcd-operator-b45778765-bk226\" (UID: \"244d70a9-5aaf-495d-82bc-fcfaa9a5a984\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bk226" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.057395 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/244d70a9-5aaf-495d-82bc-fcfaa9a5a984-etcd-service-ca\") pod \"etcd-operator-b45778765-bk226\" (UID: \"244d70a9-5aaf-495d-82bc-fcfaa9a5a984\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bk226" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.059448 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.063769 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/244d70a9-5aaf-495d-82bc-fcfaa9a5a984-serving-cert\") pod \"etcd-operator-b45778765-bk226\" (UID: \"244d70a9-5aaf-495d-82bc-fcfaa9a5a984\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bk226" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.063810 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/244d70a9-5aaf-495d-82bc-fcfaa9a5a984-etcd-client\") pod \"etcd-operator-b45778765-bk226\" (UID: \"244d70a9-5aaf-495d-82bc-fcfaa9a5a984\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bk226" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.086057 
4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.099538 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.144173 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.158536 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.199896 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.218628 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.239456 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.275667 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.278569 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.299417 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.325628 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.338181 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.359731 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.379056 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.399114 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.419094 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.438513 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.459154 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 15:51:59 crc kubenswrapper[4767]: 
I0127 15:51:59.478836 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.499005 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.518606 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.539656 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.559238 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.579535 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.599727 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.618618 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.638805 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.659565 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.679697 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.714028 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.723705 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.739184 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.758748 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.779712 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.798648 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.819709 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 
15:51:59.839856 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.865565 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.877959 4767 request.go:700] Waited for 1.004871425s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-service-ca&limit=500&resourceVersion=0 Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.879431 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.899379 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.918723 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.939845 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.959554 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.978733 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 15:51:59 crc kubenswrapper[4767]: I0127 15:51:59.999347 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.019070 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.039070 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.059162 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.078835 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.099352 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.118987 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.139710 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.165032 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.178629 4767 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.199310 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.219240 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.238902 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.259483 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.279480 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.298891 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.318429 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.339636 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.362173 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.379090 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.399147 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.419093 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.439183 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.459759 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.479379 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.502973 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.519823 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.539764 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.568137 4767 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.578950 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.599162 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.620995 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.639278 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.659658 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.678752 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.701282 4767 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.719643 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.739977 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.759622 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.779750 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.799241 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.833476 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b96fh\" (UniqueName: \"kubernetes.io/projected/244d70a9-5aaf-495d-82bc-fcfaa9a5a984-kube-api-access-b96fh\") pod \"etcd-operator-b45778765-bk226\" (UID: \"244d70a9-5aaf-495d-82bc-fcfaa9a5a984\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bk226" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.883569 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkm4f\" (UniqueName: \"kubernetes.io/projected/90596a9c-3db0-47e4-a002-a97cd73f2ab9-kube-api-access-zkm4f\") pod \"console-f9d7485db-vxkdk\" (UID: \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\") " pod="openshift-console/console-f9d7485db-vxkdk" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.885748 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34c3a00d-6b69-4790-ba95-29ae01dd296f-serving-cert\") pod \"route-controller-manager-6576b87f9c-t67t2\" (UID: \"34c3a00d-6b69-4790-ba95-29ae01dd296f\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.885820 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nstw6\" (UniqueName: \"kubernetes.io/projected/4d7e5c51-63bd-46b6-adef-459b93b18142-kube-api-access-nstw6\") pod \"openshift-config-operator-7777fb866f-l4l7n\" (UID: \"4d7e5c51-63bd-46b6-adef-459b93b18142\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l4l7n" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.885855 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-serving-cert\") pod \"controller-manager-879f6c89f-7m254\" (UID: \"9d9edf4c-6df3-484c-9bb7-a344d8147aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.885879 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb9fa9e7-f243-4240-b739-babed8be646f-serving-cert\") pod \"authentication-operator-69f744f599-4blj6\" (UID: \"bb9fa9e7-f243-4240-b739-babed8be646f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4blj6" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.885907 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a6bb2ba9-5a6a-438b-960e-05170e0928a8-machine-approver-tls\") pod \"machine-approver-56656f9798-f9mz7\" (UID: \"a6bb2ba9-5a6a-438b-960e-05170e0928a8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f9mz7" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.885933 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/10023d91-2be9-4ad9-a801-ef782f263aca-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-sq4g8\" (UID: \"10023d91-2be9-4ad9-a801-ef782f263aca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sq4g8" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.885954 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c067093-6c7c-47fb-bcc6-d50bba65fe78-trusted-ca\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.885972 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp9db\" (UniqueName: \"kubernetes.io/projected/c118259f-65cb-437d-abda-b69562018d38-kube-api-access-cp9db\") pod \"cluster-samples-operator-665b6dd947-4vdpc\" (UID: \"c118259f-65cb-437d-abda-b69562018d38\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4vdpc" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.885995 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-config\") pod \"controller-manager-879f6c89f-7m254\" (UID: 
\"9d9edf4c-6df3-484c-9bb7-a344d8147aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886019 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c2a37542-d13b-431e-a375-69e3fc2e90eb-encryption-config\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886039 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac80ca44-c0df-4f24-8177-5dc9cd10ea4f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cd7g2\" (UID: \"ac80ca44-c0df-4f24-8177-5dc9cd10ea4f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cd7g2" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886055 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9602d005-3eaf-4e35-a19b-a406036cc295-metrics-tls\") pod \"ingress-operator-5b745b69d9-hgzzw\" (UID: \"9602d005-3eaf-4e35-a19b-a406036cc295\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hgzzw" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886089 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ffc4f39e-e317-408b-8031-5cf9b9bb20cf-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6l2vm\" (UID: \"ffc4f39e-e317-408b-8031-5cf9b9bb20cf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6l2vm" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886121 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5f2dc8d-8525-4033-bd8d-3bc73fcbf41d-serving-cert\") pod \"console-operator-58897d9998-64xhv\" (UID: \"f5f2dc8d-8525-4033-bd8d-3bc73fcbf41d\") " pod="openshift-console-operator/console-operator-58897d9998-64xhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886145 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3587430f-8bc8-4625-b262-e1d6f1c8454b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-8mtbc\" (UID: \"3587430f-8bc8-4625-b262-e1d6f1c8454b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mtbc" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886164 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/333657b1-ebc6-4900-93eb-7762fd0eeaac-metrics-certs\") pod \"router-default-5444994796-4n6ch\" (UID: \"333657b1-ebc6-4900-93eb-7762fd0eeaac\") " pod="openshift-ingress/router-default-5444994796-4n6ch" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886188 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5c067093-6c7c-47fb-bcc6-d50bba65fe78-installation-pull-secrets\") pod 
\"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886241 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34c3a00d-6b69-4790-ba95-29ae01dd296f-client-ca\") pod \"route-controller-manager-6576b87f9c-t67t2\" (UID: \"34c3a00d-6b69-4790-ba95-29ae01dd296f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886267 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzf7f\" (UniqueName: \"kubernetes.io/projected/9602d005-3eaf-4e35-a19b-a406036cc295-kube-api-access-vzf7f\") pod \"ingress-operator-5b745b69d9-hgzzw\" (UID: \"9602d005-3eaf-4e35-a19b-a406036cc295\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hgzzw" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886296 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/333657b1-ebc6-4900-93eb-7762fd0eeaac-stats-auth\") pod \"router-default-5444994796-4n6ch\" (UID: \"333657b1-ebc6-4900-93eb-7762fd0eeaac\") " pod="openshift-ingress/router-default-5444994796-4n6ch" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886326 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/90596a9c-3db0-47e4-a002-a97cd73f2ab9-oauth-serving-cert\") pod \"console-f9d7485db-vxkdk\" (UID: \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\") " pod="openshift-console/console-f9d7485db-vxkdk" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886352 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prmkd\" (UniqueName: \"kubernetes.io/projected/3587430f-8bc8-4625-b262-e1d6f1c8454b-kube-api-access-prmkd\") pod \"openshift-controller-manager-operator-756b6f6bc6-8mtbc\" (UID: \"3587430f-8bc8-4625-b262-e1d6f1c8454b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mtbc" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886377 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/10023d91-2be9-4ad9-a801-ef782f263aca-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-sq4g8\" (UID: \"10023d91-2be9-4ad9-a801-ef782f263aca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sq4g8" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886400 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjj99\" (UniqueName: \"kubernetes.io/projected/56755333-86a4-4a45-b49a-c518575ad5f0-kube-api-access-wjj99\") pod \"machine-api-operator-5694c8668f-69lb2\" (UID: \"56755333-86a4-4a45-b49a-c518575ad5f0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-69lb2" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886444 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kfcc\" (UniqueName: 
\"kubernetes.io/projected/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-kube-api-access-4kfcc\") pod \"controller-manager-879f6c89f-7m254\" (UID: \"9d9edf4c-6df3-484c-9bb7-a344d8147aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886490 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac80ca44-c0df-4f24-8177-5dc9cd10ea4f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cd7g2\" (UID: \"ac80ca44-c0df-4f24-8177-5dc9cd10ea4f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cd7g2" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886520 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb9fa9e7-f243-4240-b739-babed8be646f-config\") pod \"authentication-operator-69f744f599-4blj6\" (UID: \"bb9fa9e7-f243-4240-b739-babed8be646f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4blj6" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886577 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2a37542-d13b-431e-a375-69e3fc2e90eb-serving-cert\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886614 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5c067093-6c7c-47fb-bcc6-d50bba65fe78-bound-sa-token\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886651 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnq8k\" (UniqueName: \"kubernetes.io/projected/29ab3a2b-59d9-4e16-915f-f76e1d215929-kube-api-access-bnq8k\") pod \"dns-operator-744455d44c-fctcl\" (UID: \"29ab3a2b-59d9-4e16-915f-f76e1d215929\") " pod="openshift-dns-operator/dns-operator-744455d44c-fctcl" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886682 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb9fa9e7-f243-4240-b739-babed8be646f-service-ca-bundle\") pod \"authentication-operator-69f744f599-4blj6\" (UID: \"bb9fa9e7-f243-4240-b739-babed8be646f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4blj6" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886717 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/333657b1-ebc6-4900-93eb-7762fd0eeaac-default-certificate\") pod \"router-default-5444994796-4n6ch\" (UID: \"333657b1-ebc6-4900-93eb-7762fd0eeaac\") " pod="openshift-ingress/router-default-5444994796-4n6ch" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886747 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/c118259f-65cb-437d-abda-b69562018d38-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4vdpc\" (UID: \"c118259f-65cb-437d-abda-b69562018d38\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4vdpc" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886786 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886849 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtnfs\" (UniqueName: \"kubernetes.io/projected/ffc4f39e-e317-408b-8031-5cf9b9bb20cf-kube-api-access-wtnfs\") pod \"machine-config-controller-84d6567774-6l2vm\" (UID: \"ffc4f39e-e317-408b-8031-5cf9b9bb20cf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6l2vm" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886877 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/90596a9c-3db0-47e4-a002-a97cd73f2ab9-service-ca\") pod \"console-f9d7485db-vxkdk\" (UID: \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\") " pod="openshift-console/console-f9d7485db-vxkdk" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886895 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czwwg\" (UniqueName: \"kubernetes.io/projected/34c3a00d-6b69-4790-ba95-29ae01dd296f-kube-api-access-czwwg\") pod \"route-controller-manager-6576b87f9c-t67t2\" (UID: \"34c3a00d-6b69-4790-ba95-29ae01dd296f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886933 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2a37542-d13b-431e-a375-69e3fc2e90eb-config\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886974 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ffc4f39e-e317-408b-8031-5cf9b9bb20cf-proxy-tls\") pod \"machine-config-controller-84d6567774-6l2vm\" (UID: \"ffc4f39e-e317-408b-8031-5cf9b9bb20cf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6l2vm" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.886995 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w65lq\" (UniqueName: \"kubernetes.io/projected/a6bb2ba9-5a6a-438b-960e-05170e0928a8-kube-api-access-w65lq\") pod \"machine-approver-56656f9798-f9mz7\" (UID: \"a6bb2ba9-5a6a-438b-960e-05170e0928a8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f9mz7" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887013 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/90596a9c-3db0-47e4-a002-a97cd73f2ab9-console-serving-cert\") pod \"console-f9d7485db-vxkdk\" (UID: \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\") " pod="openshift-console/console-f9d7485db-vxkdk" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887036 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c2a37542-d13b-431e-a375-69e3fc2e90eb-etcd-client\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887055 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c2a37542-d13b-431e-a375-69e3fc2e90eb-etcd-serving-ca\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887075 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q659\" (UniqueName: \"kubernetes.io/projected/bb9fa9e7-f243-4240-b739-babed8be646f-kube-api-access-6q659\") pod \"authentication-operator-69f744f599-4blj6\" (UID: \"bb9fa9e7-f243-4240-b739-babed8be646f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4blj6" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887094 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a6bb2ba9-5a6a-438b-960e-05170e0928a8-auth-proxy-config\") pod \"machine-approver-56656f9798-f9mz7\" (UID: \"a6bb2ba9-5a6a-438b-960e-05170e0928a8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f9mz7" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887117 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j28q\" (UniqueName: \"kubernetes.io/projected/5c067093-6c7c-47fb-bcc6-d50bba65fe78-kube-api-access-2j28q\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:00 crc kubenswrapper[4767]: E0127 15:52:00.887160 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:01.38714039 +0000 UTC m=+143.776157913 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887188 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5c067093-6c7c-47fb-bcc6-d50bba65fe78-ca-trust-extracted\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887255 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46691bb3-2fdb-402e-a030-4855bfd6684a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-d96md\" (UID: \"46691bb3-2fdb-402e-a030-4855bfd6684a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d96md" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887305 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7m254\" (UID: \"9d9edf4c-6df3-484c-9bb7-a344d8147aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887334 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/401b07cc-e3c3-4d71-9c55-c30f78a0335c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-k4jsr\" (UID: \"401b07cc-e3c3-4d71-9c55-c30f78a0335c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4jsr" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887381 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5c067093-6c7c-47fb-bcc6-d50bba65fe78-registry-tls\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887417 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5c067093-6c7c-47fb-bcc6-d50bba65fe78-registry-certificates\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887445 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/56755333-86a4-4a45-b49a-c518575ad5f0-images\") pod \"machine-api-operator-5694c8668f-69lb2\" (UID: \"56755333-86a4-4a45-b49a-c518575ad5f0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-69lb2" Jan 27 15:52:00 crc 
kubenswrapper[4767]: I0127 15:52:00.887480 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-client-ca\") pod \"controller-manager-879f6c89f-7m254\" (UID: \"9d9edf4c-6df3-484c-9bb7-a344d8147aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887529 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f5f2dc8d-8525-4033-bd8d-3bc73fcbf41d-trusted-ca\") pod \"console-operator-58897d9998-64xhv\" (UID: \"f5f2dc8d-8525-4033-bd8d-3bc73fcbf41d\") " pod="openshift-console-operator/console-operator-58897d9998-64xhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887553 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/401b07cc-e3c3-4d71-9c55-c30f78a0335c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-k4jsr\" (UID: \"401b07cc-e3c3-4d71-9c55-c30f78a0335c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4jsr" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887587 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4d7e5c51-63bd-46b6-adef-459b93b18142-available-featuregates\") pod \"openshift-config-operator-7777fb866f-l4l7n\" (UID: \"4d7e5c51-63bd-46b6-adef-459b93b18142\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l4l7n" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887617 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3587430f-8bc8-4625-b262-e1d6f1c8454b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-8mtbc\" (UID: \"3587430f-8bc8-4625-b262-e1d6f1c8454b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mtbc" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887646 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/333657b1-ebc6-4900-93eb-7762fd0eeaac-service-ca-bundle\") pod \"router-default-5444994796-4n6ch\" (UID: \"333657b1-ebc6-4900-93eb-7762fd0eeaac\") " pod="openshift-ingress/router-default-5444994796-4n6ch" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887668 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/56755333-86a4-4a45-b49a-c518575ad5f0-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-69lb2\" (UID: \"56755333-86a4-4a45-b49a-c518575ad5f0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-69lb2" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887695 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2a37542-d13b-431e-a375-69e3fc2e90eb-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:00 crc 
kubenswrapper[4767]: I0127 15:52:00.887731 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c2a37542-d13b-431e-a375-69e3fc2e90eb-node-pullsecrets\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887759 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d7e5c51-63bd-46b6-adef-459b93b18142-serving-cert\") pod \"openshift-config-operator-7777fb866f-l4l7n\" (UID: \"4d7e5c51-63bd-46b6-adef-459b93b18142\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l4l7n" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887784 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/90596a9c-3db0-47e4-a002-a97cd73f2ab9-console-config\") pod \"console-f9d7485db-vxkdk\" (UID: \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\") " pod="openshift-console/console-f9d7485db-vxkdk" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887812 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6bb2ba9-5a6a-438b-960e-05170e0928a8-config\") pod \"machine-approver-56656f9798-f9mz7\" (UID: \"a6bb2ba9-5a6a-438b-960e-05170e0928a8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f9mz7" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887838 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpsn2\" (UniqueName: \"kubernetes.io/projected/10023d91-2be9-4ad9-a801-ef782f263aca-kube-api-access-kpsn2\") pod \"cluster-image-registry-operator-dc59b4c8b-sq4g8\" (UID: \"10023d91-2be9-4ad9-a801-ef782f263aca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sq4g8" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887861 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56755333-86a4-4a45-b49a-c518575ad5f0-config\") pod \"machine-api-operator-5694c8668f-69lb2\" (UID: \"56755333-86a4-4a45-b49a-c518575ad5f0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-69lb2" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887880 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/90596a9c-3db0-47e4-a002-a97cd73f2ab9-console-oauth-config\") pod \"console-f9d7485db-vxkdk\" (UID: \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\") " pod="openshift-console/console-f9d7485db-vxkdk" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887909 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsj5x\" (UniqueName: \"kubernetes.io/projected/25e39933-042b-46a8-9e96-19acb0944e08-kube-api-access-vsj5x\") pod \"downloads-7954f5f757-ksqxd\" (UID: \"25e39933-042b-46a8-9e96-19acb0944e08\") " pod="openshift-console/downloads-7954f5f757-ksqxd" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887936 4767 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9602d005-3eaf-4e35-a19b-a406036cc295-bound-sa-token\") pod \"ingress-operator-5b745b69d9-hgzzw\" (UID: \"9602d005-3eaf-4e35-a19b-a406036cc295\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hgzzw" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887960 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb9fa9e7-f243-4240-b739-babed8be646f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4blj6\" (UID: \"bb9fa9e7-f243-4240-b739-babed8be646f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4blj6" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.887980 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgpzk\" (UniqueName: \"kubernetes.io/projected/f5f2dc8d-8525-4033-bd8d-3bc73fcbf41d-kube-api-access-vgpzk\") pod \"console-operator-58897d9998-64xhv\" (UID: \"f5f2dc8d-8525-4033-bd8d-3bc73fcbf41d\") " pod="openshift-console-operator/console-operator-58897d9998-64xhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.888013 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34c3a00d-6b69-4790-ba95-29ae01dd296f-config\") pod \"route-controller-manager-6576b87f9c-t67t2\" (UID: \"34c3a00d-6b69-4790-ba95-29ae01dd296f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.888034 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46691bb3-2fdb-402e-a030-4855bfd6684a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-d96md\" (UID: \"46691bb3-2fdb-402e-a030-4855bfd6684a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d96md" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.888054 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c2a37542-d13b-431e-a375-69e3fc2e90eb-audit\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.888074 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c2a37542-d13b-431e-a375-69e3fc2e90eb-image-import-ca\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.888098 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxrct\" (UniqueName: \"kubernetes.io/projected/c2a37542-d13b-431e-a375-69e3fc2e90eb-kube-api-access-dxrct\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.888121 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-5qn2r\" (UniqueName: \"kubernetes.io/projected/333657b1-ebc6-4900-93eb-7762fd0eeaac-kube-api-access-5qn2r\") pod \"router-default-5444994796-4n6ch\" (UID: \"333657b1-ebc6-4900-93eb-7762fd0eeaac\") " pod="openshift-ingress/router-default-5444994796-4n6ch" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.888151 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/10023d91-2be9-4ad9-a801-ef782f263aca-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-sq4g8\" (UID: \"10023d91-2be9-4ad9-a801-ef782f263aca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sq4g8" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.888178 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c2a37542-d13b-431e-a375-69e3fc2e90eb-audit-dir\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.888229 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54kkg\" (UniqueName: \"kubernetes.io/projected/401b07cc-e3c3-4d71-9c55-c30f78a0335c-kube-api-access-54kkg\") pod \"openshift-apiserver-operator-796bbdcf4f-k4jsr\" (UID: \"401b07cc-e3c3-4d71-9c55-c30f78a0335c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4jsr" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.888253 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9602d005-3eaf-4e35-a19b-a406036cc295-trusted-ca\") pod \"ingress-operator-5b745b69d9-hgzzw\" (UID: \"9602d005-3eaf-4e35-a19b-a406036cc295\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hgzzw" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.888276 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/29ab3a2b-59d9-4e16-915f-f76e1d215929-metrics-tls\") pod \"dns-operator-744455d44c-fctcl\" (UID: \"29ab3a2b-59d9-4e16-915f-f76e1d215929\") " pod="openshift-dns-operator/dns-operator-744455d44c-fctcl" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.888317 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5f2dc8d-8525-4033-bd8d-3bc73fcbf41d-config\") pod \"console-operator-58897d9998-64xhv\" (UID: \"f5f2dc8d-8525-4033-bd8d-3bc73fcbf41d\") " pod="openshift-console-operator/console-operator-58897d9998-64xhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.888350 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90596a9c-3db0-47e4-a002-a97cd73f2ab9-trusted-ca-bundle\") pod \"console-f9d7485db-vxkdk\" (UID: \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\") " pod="openshift-console/console-f9d7485db-vxkdk" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.888377 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac80ca44-c0df-4f24-8177-5dc9cd10ea4f-config\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-cd7g2\" (UID: \"ac80ca44-c0df-4f24-8177-5dc9cd10ea4f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cd7g2" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.888414 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46691bb3-2fdb-402e-a030-4855bfd6684a-config\") pod \"kube-apiserver-operator-766d6c64bb-d96md\" (UID: \"46691bb3-2fdb-402e-a030-4855bfd6684a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d96md" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.989149 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.989357 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/331bbbbd-b003-4190-b8a6-149cc2b81b39-audit-dir\") pod \"apiserver-7bbb656c7d-sbq5r\" (UID: \"331bbbbd-b003-4190-b8a6-149cc2b81b39\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.989391 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtnfs\" (UniqueName: \"kubernetes.io/projected/ffc4f39e-e317-408b-8031-5cf9b9bb20cf-kube-api-access-wtnfs\") pod \"machine-config-controller-84d6567774-6l2vm\" (UID: \"ffc4f39e-e317-408b-8031-5cf9b9bb20cf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6l2vm" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.989436 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/90596a9c-3db0-47e4-a002-a97cd73f2ab9-service-ca\") pod \"console-f9d7485db-vxkdk\" (UID: \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\") " pod="openshift-console/console-f9d7485db-vxkdk" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.989457 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2a37542-d13b-431e-a375-69e3fc2e90eb-config\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.989475 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/331bbbbd-b003-4190-b8a6-149cc2b81b39-encryption-config\") pod \"apiserver-7bbb656c7d-sbq5r\" (UID: \"331bbbbd-b003-4190-b8a6-149cc2b81b39\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:00 crc kubenswrapper[4767]: E0127 15:52:00.989545 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:01.4895129 +0000 UTC m=+143.878530423 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.989627 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b6418788-50b4-4982-bde2-dc7acd6728ed-tmpfs\") pod \"packageserver-d55dfcdfc-9tjmr\" (UID: \"b6418788-50b4-4982-bde2-dc7acd6728ed\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9tjmr" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.989715 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ffc4f39e-e317-408b-8031-5cf9b9bb20cf-proxy-tls\") pod \"machine-config-controller-84d6567774-6l2vm\" (UID: \"ffc4f39e-e317-408b-8031-5cf9b9bb20cf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6l2vm" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.989746 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w65lq\" (UniqueName: \"kubernetes.io/projected/a6bb2ba9-5a6a-438b-960e-05170e0928a8-kube-api-access-w65lq\") pod \"machine-approver-56656f9798-f9mz7\" (UID: \"a6bb2ba9-5a6a-438b-960e-05170e0928a8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f9mz7" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.989778 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/21e2fdb8-e486-4a69-b9d4-00c1ce090296-certs\") pod \"machine-config-server-8tbrf\" (UID: \"21e2fdb8-e486-4a69-b9d4-00c1ce090296\") " pod="openshift-machine-config-operator/machine-config-server-8tbrf" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.989796 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jhp2\" (UniqueName: \"kubernetes.io/projected/fd7217ad-af23-4d91-bc2d-8d54a9e5580f-kube-api-access-7jhp2\") pod \"service-ca-operator-777779d784-cml7v\" (UID: \"fd7217ad-af23-4d91-bc2d-8d54a9e5580f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cml7v" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.989813 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j28q\" (UniqueName: \"kubernetes.io/projected/5c067093-6c7c-47fb-bcc6-d50bba65fe78-kube-api-access-2j28q\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.989829 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c2a37542-d13b-431e-a375-69e3fc2e90eb-etcd-client\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.989847 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dea7593b-32bb-4d48-b47a-2cf9aa0d4185-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7gdqg\" (UID: \"dea7593b-32bb-4d48-b47a-2cf9aa0d4185\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7gdqg" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.989867 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46691bb3-2fdb-402e-a030-4855bfd6684a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-d96md\" (UID: \"46691bb3-2fdb-402e-a030-4855bfd6684a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d96md" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.989882 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7m254\" (UID: \"9d9edf4c-6df3-484c-9bb7-a344d8147aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.989897 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5c067093-6c7c-47fb-bcc6-d50bba65fe78-registry-tls\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.989913 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/56755333-86a4-4a45-b49a-c518575ad5f0-images\") pod \"machine-api-operator-5694c8668f-69lb2\" (UID: \"56755333-86a4-4a45-b49a-c518575ad5f0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-69lb2" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.989928 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f5f2dc8d-8525-4033-bd8d-3bc73fcbf41d-trusted-ca\") pod \"console-operator-58897d9998-64xhv\" (UID: \"f5f2dc8d-8525-4033-bd8d-3bc73fcbf41d\") " pod="openshift-console-operator/console-operator-58897d9998-64xhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.989944 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/401b07cc-e3c3-4d71-9c55-c30f78a0335c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-k4jsr\" (UID: \"401b07cc-e3c3-4d71-9c55-c30f78a0335c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4jsr" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.989969 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3587430f-8bc8-4625-b262-e1d6f1c8454b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-8mtbc\" (UID: \"3587430f-8bc8-4625-b262-e1d6f1c8454b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mtbc" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.989985 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48f59\" (UniqueName: 
\"kubernetes.io/projected/aa6ea9de-9f71-4d6e-8304-536ccfaeaec0-kube-api-access-48f59\") pod \"marketplace-operator-79b997595-cbltv\" (UID: \"aa6ea9de-9f71-4d6e-8304-536ccfaeaec0\") " pod="openshift-marketplace/marketplace-operator-79b997595-cbltv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990001 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990019 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpsn2\" (UniqueName: \"kubernetes.io/projected/10023d91-2be9-4ad9-a801-ef782f263aca-kube-api-access-kpsn2\") pod \"cluster-image-registry-operator-dc59b4c8b-sq4g8\" (UID: \"10023d91-2be9-4ad9-a801-ef782f263aca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sq4g8" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990035 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d7e5c51-63bd-46b6-adef-459b93b18142-serving-cert\") pod \"openshift-config-operator-7777fb866f-l4l7n\" (UID: \"4d7e5c51-63bd-46b6-adef-459b93b18142\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l4l7n" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990051 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6bb2ba9-5a6a-438b-960e-05170e0928a8-config\") pod \"machine-approver-56656f9798-f9mz7\" (UID: \"a6bb2ba9-5a6a-438b-960e-05170e0928a8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f9mz7" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990056 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2a37542-d13b-431e-a375-69e3fc2e90eb-config\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990066 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrwbk\" (UniqueName: \"kubernetes.io/projected/df3e72cd-0745-4a8e-b3b5-25d23bccaa1c-kube-api-access-mrwbk\") pod \"collect-profiles-29492145-4vjsw\" (UID: \"df3e72cd-0745-4a8e-b3b5-25d23bccaa1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-4vjsw" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990086 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990103 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtcks\" (UniqueName: 
\"kubernetes.io/projected/9bc30087-3b0d-441b-b384-853b7e1003ad-kube-api-access-jtcks\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990120 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/10023d91-2be9-4ad9-a801-ef782f263aca-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-sq4g8\" (UID: \"10023d91-2be9-4ad9-a801-ef782f263aca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sq4g8" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990136 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46691bb3-2fdb-402e-a030-4855bfd6684a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-d96md\" (UID: \"46691bb3-2fdb-402e-a030-4855bfd6684a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d96md" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990151 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c2a37542-d13b-431e-a375-69e3fc2e90eb-audit\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990167 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1b629b6-588e-44f8-9f64-613ba63f3313-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-czw9w\" (UID: \"e1b629b6-588e-44f8-9f64-613ba63f3313\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-czw9w" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990230 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/29ab3a2b-59d9-4e16-915f-f76e1d215929-metrics-tls\") pod \"dns-operator-744455d44c-fctcl\" (UID: \"29ab3a2b-59d9-4e16-915f-f76e1d215929\") " pod="openshift-dns-operator/dns-operator-744455d44c-fctcl" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990264 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsxkn\" (UniqueName: \"kubernetes.io/projected/0ea03516-b574-4e25-8f8f-b45c358b5295-kube-api-access-qsxkn\") pod \"catalog-operator-68c6474976-v2v2x\" (UID: \"0ea03516-b574-4e25-8f8f-b45c358b5295\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v2v2x" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990289 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd7217ad-af23-4d91-bc2d-8d54a9e5580f-config\") pod \"service-ca-operator-777779d784-cml7v\" (UID: \"fd7217ad-af23-4d91-bc2d-8d54a9e5580f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cml7v" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990340 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90596a9c-3db0-47e4-a002-a97cd73f2ab9-trusted-ca-bundle\") pod \"console-f9d7485db-vxkdk\" (UID: 
\"90596a9c-3db0-47e4-a002-a97cd73f2ab9\") " pod="openshift-console/console-f9d7485db-vxkdk" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990360 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac80ca44-c0df-4f24-8177-5dc9cd10ea4f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cd7g2\" (UID: \"ac80ca44-c0df-4f24-8177-5dc9cd10ea4f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cd7g2" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990376 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46691bb3-2fdb-402e-a030-4855bfd6684a-config\") pod \"kube-apiserver-operator-766d6c64bb-d96md\" (UID: \"46691bb3-2fdb-402e-a030-4855bfd6684a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d96md" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990382 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/90596a9c-3db0-47e4-a002-a97cd73f2ab9-service-ca\") pod \"console-f9d7485db-vxkdk\" (UID: \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\") " pod="openshift-console/console-f9d7485db-vxkdk" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990392 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990416 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkm4f\" (UniqueName: \"kubernetes.io/projected/90596a9c-3db0-47e4-a002-a97cd73f2ab9-kube-api-access-zkm4f\") pod \"console-f9d7485db-vxkdk\" (UID: \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\") " pod="openshift-console/console-f9d7485db-vxkdk" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990436 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34c3a00d-6b69-4790-ba95-29ae01dd296f-serving-cert\") pod \"route-controller-manager-6576b87f9c-t67t2\" (UID: \"34c3a00d-6b69-4790-ba95-29ae01dd296f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990460 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2q29\" (UniqueName: \"kubernetes.io/projected/fd479a9b-8563-433e-aae2-ab0856594b3f-kube-api-access-c2q29\") pod \"migrator-59844c95c7-vnr5s\" (UID: \"fd479a9b-8563-433e-aae2-ab0856594b3f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vnr5s" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990507 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-serving-cert\") pod \"controller-manager-879f6c89f-7m254\" (UID: \"9d9edf4c-6df3-484c-9bb7-a344d8147aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990529 4767 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb9fa9e7-f243-4240-b739-babed8be646f-serving-cert\") pod \"authentication-operator-69f744f599-4blj6\" (UID: \"bb9fa9e7-f243-4240-b739-babed8be646f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4blj6" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990549 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c067093-6c7c-47fb-bcc6-d50bba65fe78-trusted-ca\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990568 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/10023d91-2be9-4ad9-a801-ef782f263aca-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-sq4g8\" (UID: \"10023d91-2be9-4ad9-a801-ef782f263aca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sq4g8" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990586 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990607 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2a405d09-41d7-423a-a5d0-5413839ee40b-srv-cert\") pod \"olm-operator-6b444d44fb-vqjkg\" (UID: \"2a405d09-41d7-423a-a5d0-5413839ee40b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqjkg" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990622 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p77hn\" (UniqueName: \"kubernetes.io/projected/3580a3b5-6640-41c2-b61f-863c299c59c6-kube-api-access-p77hn\") pod \"dns-default-px962\" (UID: \"3580a3b5-6640-41c2-b61f-863c299c59c6\") " pod="openshift-dns/dns-default-px962" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990640 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-config\") pod \"controller-manager-879f6c89f-7m254\" (UID: \"9d9edf4c-6df3-484c-9bb7-a344d8147aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990656 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxfhk\" (UniqueName: \"kubernetes.io/projected/bb803c2c-ff0b-4f4a-a566-d0ca1957ce56-kube-api-access-fxfhk\") pod \"package-server-manager-789f6589d5-j6mgl\" (UID: \"bb803c2c-ff0b-4f4a-a566-d0ca1957ce56\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6mgl" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990674 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79f27\" 
(UniqueName: \"kubernetes.io/projected/331bbbbd-b003-4190-b8a6-149cc2b81b39-kube-api-access-79f27\") pod \"apiserver-7bbb656c7d-sbq5r\" (UID: \"331bbbbd-b003-4190-b8a6-149cc2b81b39\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990692 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c2a37542-d13b-431e-a375-69e3fc2e90eb-encryption-config\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990709 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/331bbbbd-b003-4190-b8a6-149cc2b81b39-audit-policies\") pod \"apiserver-7bbb656c7d-sbq5r\" (UID: \"331bbbbd-b003-4190-b8a6-149cc2b81b39\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990726 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5c067093-6c7c-47fb-bcc6-d50bba65fe78-installation-pull-secrets\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990741 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5f2dc8d-8525-4033-bd8d-3bc73fcbf41d-serving-cert\") pod \"console-operator-58897d9998-64xhv\" (UID: \"f5f2dc8d-8525-4033-bd8d-3bc73fcbf41d\") " pod="openshift-console-operator/console-operator-58897d9998-64xhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990758 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990760 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/56755333-86a4-4a45-b49a-c518575ad5f0-images\") pod \"machine-api-operator-5694c8668f-69lb2\" (UID: \"56755333-86a4-4a45-b49a-c518575ad5f0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-69lb2" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990773 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6283b57b-899c-4d3d-b1a4-531a683d3853-socket-dir\") pod \"csi-hostpathplugin-5pb8t\" (UID: \"6283b57b-899c-4d3d-b1a4-531a683d3853\") " pod="hostpath-provisioner/csi-hostpathplugin-5pb8t" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990800 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6283b57b-899c-4d3d-b1a4-531a683d3853-registration-dir\") pod \"csi-hostpathplugin-5pb8t\" (UID: \"6283b57b-899c-4d3d-b1a4-531a683d3853\") " 
pod="hostpath-provisioner/csi-hostpathplugin-5pb8t" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990826 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wp5r\" (UniqueName: \"kubernetes.io/projected/df1defe0-ab80-4262-a444-23043c0a5ff0-kube-api-access-6wp5r\") pod \"ingress-canary-vxtlv\" (UID: \"df1defe0-ab80-4262-a444-23043c0a5ff0\") " pod="openshift-ingress-canary/ingress-canary-vxtlv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990846 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34c3a00d-6b69-4790-ba95-29ae01dd296f-client-ca\") pod \"route-controller-manager-6576b87f9c-t67t2\" (UID: \"34c3a00d-6b69-4790-ba95-29ae01dd296f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990872 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzf7f\" (UniqueName: \"kubernetes.io/projected/9602d005-3eaf-4e35-a19b-a406036cc295-kube-api-access-vzf7f\") pod \"ingress-operator-5b745b69d9-hgzzw\" (UID: \"9602d005-3eaf-4e35-a19b-a406036cc295\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hgzzw" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990902 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/333657b1-ebc6-4900-93eb-7762fd0eeaac-stats-auth\") pod \"router-default-5444994796-4n6ch\" (UID: \"333657b1-ebc6-4900-93eb-7762fd0eeaac\") " pod="openshift-ingress/router-default-5444994796-4n6ch" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990930 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/10023d91-2be9-4ad9-a801-ef782f263aca-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-sq4g8\" (UID: \"10023d91-2be9-4ad9-a801-ef782f263aca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sq4g8" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990954 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5thn\" (UniqueName: \"kubernetes.io/projected/21e2fdb8-e486-4a69-b9d4-00c1ce090296-kube-api-access-h5thn\") pod \"machine-config-server-8tbrf\" (UID: \"21e2fdb8-e486-4a69-b9d4-00c1ce090296\") " pod="openshift-machine-config-operator/machine-config-server-8tbrf" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.990981 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kfcc\" (UniqueName: \"kubernetes.io/projected/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-kube-api-access-4kfcc\") pod \"controller-manager-879f6c89f-7m254\" (UID: \"9d9edf4c-6df3-484c-9bb7-a344d8147aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991006 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8299442b-4dd3-4520-9e47-d461d0538647-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mbflz\" (UID: \"8299442b-4dd3-4520-9e47-d461d0538647\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbflz" Jan 27 15:52:00 crc 
kubenswrapper[4767]: I0127 15:52:00.991029 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/331bbbbd-b003-4190-b8a6-149cc2b81b39-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-sbq5r\" (UID: \"331bbbbd-b003-4190-b8a6-149cc2b81b39\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991046 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2a37542-d13b-431e-a375-69e3fc2e90eb-serving-cert\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991063 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac80ca44-c0df-4f24-8177-5dc9cd10ea4f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cd7g2\" (UID: \"ac80ca44-c0df-4f24-8177-5dc9cd10ea4f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cd7g2" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991082 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5c067093-6c7c-47fb-bcc6-d50bba65fe78-bound-sa-token\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991099 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991123 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e82f107f-9b85-4fdd-911d-ca674a002dea-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4qvr6\" (UID: \"e82f107f-9b85-4fdd-911d-ca674a002dea\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qvr6" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991146 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c118259f-65cb-437d-abda-b69562018d38-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4vdpc\" (UID: \"c118259f-65cb-437d-abda-b69562018d38\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4vdpc" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991172 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/333657b1-ebc6-4900-93eb-7762fd0eeaac-default-certificate\") pod \"router-default-5444994796-4n6ch\" (UID: \"333657b1-ebc6-4900-93eb-7762fd0eeaac\") " pod="openshift-ingress/router-default-5444994796-4n6ch" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991217 4767 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991240 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1b629b6-588e-44f8-9f64-613ba63f3313-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-czw9w\" (UID: \"e1b629b6-588e-44f8-9f64-613ba63f3313\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-czw9w" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991269 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991290 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czwwg\" (UniqueName: \"kubernetes.io/projected/34c3a00d-6b69-4790-ba95-29ae01dd296f-kube-api-access-czwwg\") pod \"route-controller-manager-6576b87f9c-t67t2\" (UID: \"34c3a00d-6b69-4790-ba95-29ae01dd296f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991311 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0ea03516-b574-4e25-8f8f-b45c358b5295-profile-collector-cert\") pod \"catalog-operator-68c6474976-v2v2x\" (UID: \"0ea03516-b574-4e25-8f8f-b45c358b5295\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v2v2x" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991336 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1b629b6-588e-44f8-9f64-613ba63f3313-config\") pod \"kube-controller-manager-operator-78b949d7b-czw9w\" (UID: \"e1b629b6-588e-44f8-9f64-613ba63f3313\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-czw9w" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991354 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/21e2fdb8-e486-4a69-b9d4-00c1ce090296-node-bootstrap-token\") pod \"machine-config-server-8tbrf\" (UID: \"21e2fdb8-e486-4a69-b9d4-00c1ce090296\") " pod="openshift-machine-config-operator/machine-config-server-8tbrf" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991372 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991388 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df3e72cd-0745-4a8e-b3b5-25d23bccaa1c-secret-volume\") pod \"collect-profiles-29492145-4vjsw\" (UID: \"df3e72cd-0745-4a8e-b3b5-25d23bccaa1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-4vjsw" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991405 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991424 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/90596a9c-3db0-47e4-a002-a97cd73f2ab9-console-serving-cert\") pod \"console-f9d7485db-vxkdk\" (UID: \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\") " pod="openshift-console/console-f9d7485db-vxkdk" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991442 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c2a37542-d13b-431e-a375-69e3fc2e90eb-etcd-serving-ca\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991461 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q659\" (UniqueName: \"kubernetes.io/projected/bb9fa9e7-f243-4240-b739-babed8be646f-kube-api-access-6q659\") pod \"authentication-operator-69f744f599-4blj6\" (UID: \"bb9fa9e7-f243-4240-b739-babed8be646f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4blj6" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991479 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a6bb2ba9-5a6a-438b-960e-05170e0928a8-auth-proxy-config\") pod \"machine-approver-56656f9798-f9mz7\" (UID: \"a6bb2ba9-5a6a-438b-960e-05170e0928a8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f9mz7" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991496 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6283b57b-899c-4d3d-b1a4-531a683d3853-csi-data-dir\") pod \"csi-hostpathplugin-5pb8t\" (UID: \"6283b57b-899c-4d3d-b1a4-531a683d3853\") " pod="hostpath-provisioner/csi-hostpathplugin-5pb8t" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991513 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5c067093-6c7c-47fb-bcc6-d50bba65fe78-ca-trust-extracted\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991529 4767 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/401b07cc-e3c3-4d71-9c55-c30f78a0335c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-k4jsr\" (UID: \"401b07cc-e3c3-4d71-9c55-c30f78a0335c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4jsr" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991544 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/331bbbbd-b003-4190-b8a6-149cc2b81b39-etcd-client\") pod \"apiserver-7bbb656c7d-sbq5r\" (UID: \"331bbbbd-b003-4190-b8a6-149cc2b81b39\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991561 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5c067093-6c7c-47fb-bcc6-d50bba65fe78-registry-certificates\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991579 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-client-ca\") pod \"controller-manager-879f6c89f-7m254\" (UID: \"9d9edf4c-6df3-484c-9bb7-a344d8147aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991595 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aa6ea9de-9f71-4d6e-8304-536ccfaeaec0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cbltv\" (UID: \"aa6ea9de-9f71-4d6e-8304-536ccfaeaec0\") " pod="openshift-marketplace/marketplace-operator-79b997595-cbltv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991603 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46691bb3-2fdb-402e-a030-4855bfd6684a-config\") pod \"kube-apiserver-operator-766d6c64bb-d96md\" (UID: \"46691bb3-2fdb-402e-a030-4855bfd6684a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d96md" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991614 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b6418788-50b4-4982-bde2-dc7acd6728ed-webhook-cert\") pod \"packageserver-d55dfcdfc-9tjmr\" (UID: \"b6418788-50b4-4982-bde2-dc7acd6728ed\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9tjmr" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991685 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4d7e5c51-63bd-46b6-adef-459b93b18142-available-featuregates\") pod \"openshift-config-operator-7777fb866f-l4l7n\" (UID: \"4d7e5c51-63bd-46b6-adef-459b93b18142\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l4l7n" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991712 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bb803c2c-ff0b-4f4a-a566-d0ca1957ce56-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-j6mgl\" (UID: \"bb803c2c-ff0b-4f4a-a566-d0ca1957ce56\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6mgl" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991743 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6283b57b-899c-4d3d-b1a4-531a683d3853-plugins-dir\") pod \"csi-hostpathplugin-5pb8t\" (UID: \"6283b57b-899c-4d3d-b1a4-531a683d3853\") " pod="hostpath-provisioner/csi-hostpathplugin-5pb8t" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991772 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnf8m\" (UniqueName: \"kubernetes.io/projected/dea7593b-32bb-4d48-b47a-2cf9aa0d4185-kube-api-access-gnf8m\") pod \"control-plane-machine-set-operator-78cbb6b69f-7gdqg\" (UID: \"dea7593b-32bb-4d48-b47a-2cf9aa0d4185\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7gdqg" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991800 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/56755333-86a4-4a45-b49a-c518575ad5f0-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-69lb2\" (UID: \"56755333-86a4-4a45-b49a-c518575ad5f0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-69lb2" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991828 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2a37542-d13b-431e-a375-69e3fc2e90eb-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991851 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/333657b1-ebc6-4900-93eb-7762fd0eeaac-service-ca-bundle\") pod \"router-default-5444994796-4n6ch\" (UID: \"333657b1-ebc6-4900-93eb-7762fd0eeaac\") " pod="openshift-ingress/router-default-5444994796-4n6ch" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991890 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c2a37542-d13b-431e-a375-69e3fc2e90eb-node-pullsecrets\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991918 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9bc30087-3b0d-441b-b384-853b7e1003ad-audit-dir\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991947 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.991977 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa6ea9de-9f71-4d6e-8304-536ccfaeaec0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cbltv\" (UID: \"aa6ea9de-9f71-4d6e-8304-536ccfaeaec0\") " pod="openshift-marketplace/marketplace-operator-79b997595-cbltv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.992000 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/788715b0-b06a-4f34-afb4-443a4c8ff7b1-signing-key\") pod \"service-ca-9c57cc56f-lcrxj\" (UID: \"788715b0-b06a-4f34-afb4-443a4c8ff7b1\") " pod="openshift-service-ca/service-ca-9c57cc56f-lcrxj" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.992023 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/331bbbbd-b003-4190-b8a6-149cc2b81b39-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-sbq5r\" (UID: \"331bbbbd-b003-4190-b8a6-149cc2b81b39\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.992045 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnmk5\" (UniqueName: \"kubernetes.io/projected/762f91d9-714d-4ba5-8c0c-f64498897186-kube-api-access-hnmk5\") pod \"multus-admission-controller-857f4d67dd-gfdql\" (UID: \"762f91d9-714d-4ba5-8c0c-f64498897186\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gfdql" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.992071 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/90596a9c-3db0-47e4-a002-a97cd73f2ab9-console-config\") pod \"console-f9d7485db-vxkdk\" (UID: \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\") " pod="openshift-console/console-f9d7485db-vxkdk" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.992094 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df1defe0-ab80-4262-a444-23043c0a5ff0-cert\") pod \"ingress-canary-vxtlv\" (UID: \"df1defe0-ab80-4262-a444-23043c0a5ff0\") " pod="openshift-ingress-canary/ingress-canary-vxtlv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.992123 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56755333-86a4-4a45-b49a-c518575ad5f0-config\") pod \"machine-api-operator-5694c8668f-69lb2\" (UID: \"56755333-86a4-4a45-b49a-c518575ad5f0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-69lb2" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.992149 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/90596a9c-3db0-47e4-a002-a97cd73f2ab9-console-oauth-config\") pod \"console-f9d7485db-vxkdk\" (UID: \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\") " 
pod="openshift-console/console-f9d7485db-vxkdk" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.992177 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsj5x\" (UniqueName: \"kubernetes.io/projected/25e39933-042b-46a8-9e96-19acb0944e08-kube-api-access-vsj5x\") pod \"downloads-7954f5f757-ksqxd\" (UID: \"25e39933-042b-46a8-9e96-19acb0944e08\") " pod="openshift-console/downloads-7954f5f757-ksqxd" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.993172 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c2a37542-d13b-431e-a375-69e3fc2e90eb-audit\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.993606 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f5f2dc8d-8525-4033-bd8d-3bc73fcbf41d-trusted-ca\") pod \"console-operator-58897d9998-64xhv\" (UID: \"f5f2dc8d-8525-4033-bd8d-3bc73fcbf41d\") " pod="openshift-console-operator/console-operator-58897d9998-64xhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.993928 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ffc4f39e-e317-408b-8031-5cf9b9bb20cf-proxy-tls\") pod \"machine-config-controller-84d6567774-6l2vm\" (UID: \"ffc4f39e-e317-408b-8031-5cf9b9bb20cf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6l2vm" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.994121 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c2a37542-d13b-431e-a375-69e3fc2e90eb-etcd-client\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.994222 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9602d005-3eaf-4e35-a19b-a406036cc295-bound-sa-token\") pod \"ingress-operator-5b745b69d9-hgzzw\" (UID: \"9602d005-3eaf-4e35-a19b-a406036cc295\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hgzzw" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.994264 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb9fa9e7-f243-4240-b739-babed8be646f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4blj6\" (UID: \"bb9fa9e7-f243-4240-b739-babed8be646f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4blj6" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.994342 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e82f107f-9b85-4fdd-911d-ca674a002dea-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4qvr6\" (UID: \"e82f107f-9b85-4fdd-911d-ca674a002dea\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qvr6" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.994371 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" 
(UniqueName: \"kubernetes.io/secret/762f91d9-714d-4ba5-8c0c-f64498897186-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-gfdql\" (UID: \"762f91d9-714d-4ba5-8c0c-f64498897186\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gfdql" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.994401 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34c3a00d-6b69-4790-ba95-29ae01dd296f-config\") pod \"route-controller-manager-6576b87f9c-t67t2\" (UID: \"34c3a00d-6b69-4790-ba95-29ae01dd296f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.994431 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgpzk\" (UniqueName: \"kubernetes.io/projected/f5f2dc8d-8525-4033-bd8d-3bc73fcbf41d-kube-api-access-vgpzk\") pod \"console-operator-58897d9998-64xhv\" (UID: \"f5f2dc8d-8525-4033-bd8d-3bc73fcbf41d\") " pod="openshift-console-operator/console-operator-58897d9998-64xhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.994460 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8299442b-4dd3-4520-9e47-d461d0538647-images\") pod \"machine-config-operator-74547568cd-mbflz\" (UID: \"8299442b-4dd3-4520-9e47-d461d0538647\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbflz" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.994490 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c2a37542-d13b-431e-a375-69e3fc2e90eb-image-import-ca\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.994517 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxrct\" (UniqueName: \"kubernetes.io/projected/c2a37542-d13b-431e-a375-69e3fc2e90eb-kube-api-access-dxrct\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.994545 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qn2r\" (UniqueName: \"kubernetes.io/projected/333657b1-ebc6-4900-93eb-7762fd0eeaac-kube-api-access-5qn2r\") pod \"router-default-5444994796-4n6ch\" (UID: \"333657b1-ebc6-4900-93eb-7762fd0eeaac\") " pod="openshift-ingress/router-default-5444994796-4n6ch" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.994572 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c2a37542-d13b-431e-a375-69e3fc2e90eb-audit-dir\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.994598 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sj4p\" (UniqueName: \"kubernetes.io/projected/8299442b-4dd3-4520-9e47-d461d0538647-kube-api-access-6sj4p\") pod \"machine-config-operator-74547568cd-mbflz\" (UID: \"8299442b-4dd3-4520-9e47-d461d0538647\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbflz" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.994631 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54kkg\" (UniqueName: \"kubernetes.io/projected/401b07cc-e3c3-4d71-9c55-c30f78a0335c-kube-api-access-54kkg\") pod \"openshift-apiserver-operator-796bbdcf4f-k4jsr\" (UID: \"401b07cc-e3c3-4d71-9c55-c30f78a0335c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4jsr" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.994655 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9602d005-3eaf-4e35-a19b-a406036cc295-trusted-ca\") pod \"ingress-operator-5b745b69d9-hgzzw\" (UID: \"9602d005-3eaf-4e35-a19b-a406036cc295\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hgzzw" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.994680 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8299442b-4dd3-4520-9e47-d461d0538647-proxy-tls\") pod \"machine-config-operator-74547568cd-mbflz\" (UID: \"8299442b-4dd3-4520-9e47-d461d0538647\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbflz" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.994705 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6283b57b-899c-4d3d-b1a4-531a683d3853-mountpoint-dir\") pod \"csi-hostpathplugin-5pb8t\" (UID: \"6283b57b-899c-4d3d-b1a4-531a683d3853\") " pod="hostpath-provisioner/csi-hostpathplugin-5pb8t" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.994947 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5f2dc8d-8525-4033-bd8d-3bc73fcbf41d-config\") pod \"console-operator-58897d9998-64xhv\" (UID: \"f5f2dc8d-8525-4033-bd8d-3bc73fcbf41d\") " pod="openshift-console-operator/console-operator-58897d9998-64xhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.994993 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plp4r\" (UniqueName: \"kubernetes.io/projected/6283b57b-899c-4d3d-b1a4-531a683d3853-kube-api-access-plp4r\") pod \"csi-hostpathplugin-5pb8t\" (UID: \"6283b57b-899c-4d3d-b1a4-531a683d3853\") " pod="hostpath-provisioner/csi-hostpathplugin-5pb8t" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.995024 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvdk8\" (UniqueName: \"kubernetes.io/projected/788715b0-b06a-4f34-afb4-443a4c8ff7b1-kube-api-access-pvdk8\") pod \"service-ca-9c57cc56f-lcrxj\" (UID: \"788715b0-b06a-4f34-afb4-443a4c8ff7b1\") " pod="openshift-service-ca/service-ca-9c57cc56f-lcrxj" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.995050 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/331bbbbd-b003-4190-b8a6-149cc2b81b39-serving-cert\") pod \"apiserver-7bbb656c7d-sbq5r\" (UID: \"331bbbbd-b003-4190-b8a6-149cc2b81b39\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.995075 4767 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48m9r\" (UniqueName: \"kubernetes.io/projected/b6418788-50b4-4982-bde2-dc7acd6728ed-kube-api-access-48m9r\") pod \"packageserver-d55dfcdfc-9tjmr\" (UID: \"b6418788-50b4-4982-bde2-dc7acd6728ed\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9tjmr" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.995099 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.995123 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2a405d09-41d7-423a-a5d0-5413839ee40b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vqjkg\" (UID: \"2a405d09-41d7-423a-a5d0-5413839ee40b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqjkg" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.995147 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0ea03516-b574-4e25-8f8f-b45c358b5295-srv-cert\") pod \"catalog-operator-68c6474976-v2v2x\" (UID: \"0ea03516-b574-4e25-8f8f-b45c358b5295\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v2v2x" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.995169 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd7217ad-af23-4d91-bc2d-8d54a9e5580f-serving-cert\") pod \"service-ca-operator-777779d784-cml7v\" (UID: \"fd7217ad-af23-4d91-bc2d-8d54a9e5580f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cml7v" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.995173 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6bb2ba9-5a6a-438b-960e-05170e0928a8-config\") pod \"machine-approver-56656f9798-f9mz7\" (UID: \"a6bb2ba9-5a6a-438b-960e-05170e0928a8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f9mz7" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.995217 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nstw6\" (UniqueName: \"kubernetes.io/projected/4d7e5c51-63bd-46b6-adef-459b93b18142-kube-api-access-nstw6\") pod \"openshift-config-operator-7777fb866f-l4l7n\" (UID: \"4d7e5c51-63bd-46b6-adef-459b93b18142\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l4l7n" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.995247 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a6bb2ba9-5a6a-438b-960e-05170e0928a8-machine-approver-tls\") pod \"machine-approver-56656f9798-f9mz7\" (UID: \"a6bb2ba9-5a6a-438b-960e-05170e0928a8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f9mz7" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.995275 4767 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cp9db\" (UniqueName: \"kubernetes.io/projected/c118259f-65cb-437d-abda-b69562018d38-kube-api-access-cp9db\") pod \"cluster-samples-operator-665b6dd947-4vdpc\" (UID: \"c118259f-65cb-437d-abda-b69562018d38\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4vdpc" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.995315 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac80ca44-c0df-4f24-8177-5dc9cd10ea4f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cd7g2\" (UID: \"ac80ca44-c0df-4f24-8177-5dc9cd10ea4f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cd7g2" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.995339 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9602d005-3eaf-4e35-a19b-a406036cc295-metrics-tls\") pod \"ingress-operator-5b745b69d9-hgzzw\" (UID: \"9602d005-3eaf-4e35-a19b-a406036cc295\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hgzzw" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.995363 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3580a3b5-6640-41c2-b61f-863c299c59c6-metrics-tls\") pod \"dns-default-px962\" (UID: \"3580a3b5-6640-41c2-b61f-863c299c59c6\") " pod="openshift-dns/dns-default-px962" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.995390 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ffc4f39e-e317-408b-8031-5cf9b9bb20cf-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6l2vm\" (UID: \"ffc4f39e-e317-408b-8031-5cf9b9bb20cf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6l2vm" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.995415 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3587430f-8bc8-4625-b262-e1d6f1c8454b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-8mtbc\" (UID: \"3587430f-8bc8-4625-b262-e1d6f1c8454b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mtbc" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.995442 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/333657b1-ebc6-4900-93eb-7762fd0eeaac-metrics-certs\") pod \"router-default-5444994796-4n6ch\" (UID: \"333657b1-ebc6-4900-93eb-7762fd0eeaac\") " pod="openshift-ingress/router-default-5444994796-4n6ch" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.995487 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9bc30087-3b0d-441b-b384-853b7e1003ad-audit-policies\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.995906 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/c2a37542-d13b-431e-a375-69e3fc2e90eb-image-import-ca\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.996095 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df3e72cd-0745-4a8e-b3b5-25d23bccaa1c-config-volume\") pod \"collect-profiles-29492145-4vjsw\" (UID: \"df3e72cd-0745-4a8e-b3b5-25d23bccaa1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-4vjsw" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.996145 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjj99\" (UniqueName: \"kubernetes.io/projected/56755333-86a4-4a45-b49a-c518575ad5f0-kube-api-access-wjj99\") pod \"machine-api-operator-5694c8668f-69lb2\" (UID: \"56755333-86a4-4a45-b49a-c518575ad5f0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-69lb2" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.996173 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/90596a9c-3db0-47e4-a002-a97cd73f2ab9-oauth-serving-cert\") pod \"console-f9d7485db-vxkdk\" (UID: \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\") " pod="openshift-console/console-f9d7485db-vxkdk" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.996197 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prmkd\" (UniqueName: \"kubernetes.io/projected/3587430f-8bc8-4625-b262-e1d6f1c8454b-kube-api-access-prmkd\") pod \"openshift-controller-manager-operator-756b6f6bc6-8mtbc\" (UID: \"3587430f-8bc8-4625-b262-e1d6f1c8454b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mtbc" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.996241 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/788715b0-b06a-4f34-afb4-443a4c8ff7b1-signing-cabundle\") pod \"service-ca-9c57cc56f-lcrxj\" (UID: \"788715b0-b06a-4f34-afb4-443a4c8ff7b1\") " pod="openshift-service-ca/service-ca-9c57cc56f-lcrxj" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.996272 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b6418788-50b4-4982-bde2-dc7acd6728ed-apiservice-cert\") pod \"packageserver-d55dfcdfc-9tjmr\" (UID: \"b6418788-50b4-4982-bde2-dc7acd6728ed\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9tjmr" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.996298 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3580a3b5-6640-41c2-b61f-863c299c59c6-config-volume\") pod \"dns-default-px962\" (UID: \"3580a3b5-6640-41c2-b61f-863c299c59c6\") " pod="openshift-dns/dns-default-px962" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.996343 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb9fa9e7-f243-4240-b739-babed8be646f-config\") pod \"authentication-operator-69f744f599-4blj6\" (UID: \"bb9fa9e7-f243-4240-b739-babed8be646f\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-4blj6" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.996372 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnq8k\" (UniqueName: \"kubernetes.io/projected/29ab3a2b-59d9-4e16-915f-f76e1d215929-kube-api-access-bnq8k\") pod \"dns-operator-744455d44c-fctcl\" (UID: \"29ab3a2b-59d9-4e16-915f-f76e1d215929\") " pod="openshift-dns-operator/dns-operator-744455d44c-fctcl" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.996401 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb9fa9e7-f243-4240-b739-babed8be646f-service-ca-bundle\") pod \"authentication-operator-69f744f599-4blj6\" (UID: \"bb9fa9e7-f243-4240-b739-babed8be646f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4blj6" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.996405 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34c3a00d-6b69-4790-ba95-29ae01dd296f-client-ca\") pod \"route-controller-manager-6576b87f9c-t67t2\" (UID: \"34c3a00d-6b69-4790-ba95-29ae01dd296f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.996427 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjnsc\" (UniqueName: \"kubernetes.io/projected/2a405d09-41d7-423a-a5d0-5413839ee40b-kube-api-access-mjnsc\") pod \"olm-operator-6b444d44fb-vqjkg\" (UID: \"2a405d09-41d7-423a-a5d0-5413839ee40b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqjkg" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.996493 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbs6x\" (UniqueName: \"kubernetes.io/projected/e82f107f-9b85-4fdd-911d-ca674a002dea-kube-api-access-dbs6x\") pod \"kube-storage-version-migrator-operator-b67b599dd-4qvr6\" (UID: \"e82f107f-9b85-4fdd-911d-ca674a002dea\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qvr6" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.996746 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5f2dc8d-8525-4033-bd8d-3bc73fcbf41d-config\") pod \"console-operator-58897d9998-64xhv\" (UID: \"f5f2dc8d-8525-4033-bd8d-3bc73fcbf41d\") " pod="openshift-console-operator/console-operator-58897d9998-64xhv" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.996955 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56755333-86a4-4a45-b49a-c518575ad5f0-config\") pod \"machine-api-operator-5694c8668f-69lb2\" (UID: \"56755333-86a4-4a45-b49a-c518575ad5f0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-69lb2" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.997170 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c2a37542-d13b-431e-a375-69e3fc2e90eb-etcd-serving-ca\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:00 crc kubenswrapper[4767]: E0127 
15:52:00.997490 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:01.49747268 +0000 UTC m=+143.886490203 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.997698 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/56755333-86a4-4a45-b49a-c518575ad5f0-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-69lb2\" (UID: \"56755333-86a4-4a45-b49a-c518575ad5f0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-69lb2" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.998364 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/401b07cc-e3c3-4d71-9c55-c30f78a0335c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-k4jsr\" (UID: \"401b07cc-e3c3-4d71-9c55-c30f78a0335c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4jsr" Jan 27 15:52:00 crc kubenswrapper[4767]: I0127 15:52:00.998671 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2a37542-d13b-431e-a375-69e3fc2e90eb-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:00.999421 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5f2dc8d-8525-4033-bd8d-3bc73fcbf41d-serving-cert\") pod \"console-operator-58897d9998-64xhv\" (UID: \"f5f2dc8d-8525-4033-bd8d-3bc73fcbf41d\") " pod="openshift-console-operator/console-operator-58897d9998-64xhv" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:00.999593 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/333657b1-ebc6-4900-93eb-7762fd0eeaac-service-ca-bundle\") pod \"router-default-5444994796-4n6ch\" (UID: \"333657b1-ebc6-4900-93eb-7762fd0eeaac\") " pod="openshift-ingress/router-default-5444994796-4n6ch" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:00.999712 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c2a37542-d13b-431e-a375-69e3fc2e90eb-encryption-config\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.000046 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c2a37542-d13b-431e-a375-69e3fc2e90eb-node-pullsecrets\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " 
pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.000066 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d7e5c51-63bd-46b6-adef-459b93b18142-serving-cert\") pod \"openshift-config-operator-7777fb866f-l4l7n\" (UID: \"4d7e5c51-63bd-46b6-adef-459b93b18142\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l4l7n" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.000144 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4d7e5c51-63bd-46b6-adef-459b93b18142-available-featuregates\") pod \"openshift-config-operator-7777fb866f-l4l7n\" (UID: \"4d7e5c51-63bd-46b6-adef-459b93b18142\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l4l7n" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.000366 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34c3a00d-6b69-4790-ba95-29ae01dd296f-serving-cert\") pod \"route-controller-manager-6576b87f9c-t67t2\" (UID: \"34c3a00d-6b69-4790-ba95-29ae01dd296f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.000421 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac80ca44-c0df-4f24-8177-5dc9cd10ea4f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cd7g2\" (UID: \"ac80ca44-c0df-4f24-8177-5dc9cd10ea4f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cd7g2" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.000409 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-config\") pod \"controller-manager-879f6c89f-7m254\" (UID: \"9d9edf4c-6df3-484c-9bb7-a344d8147aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.000511 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5c067093-6c7c-47fb-bcc6-d50bba65fe78-installation-pull-secrets\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.000943 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ffc4f39e-e317-408b-8031-5cf9b9bb20cf-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6l2vm\" (UID: \"ffc4f39e-e317-408b-8031-5cf9b9bb20cf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6l2vm" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.000971 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/29ab3a2b-59d9-4e16-915f-f76e1d215929-metrics-tls\") pod \"dns-operator-744455d44c-fctcl\" (UID: \"29ab3a2b-59d9-4e16-915f-f76e1d215929\") " pod="openshift-dns-operator/dns-operator-744455d44c-fctcl" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.001034 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/333657b1-ebc6-4900-93eb-7762fd0eeaac-stats-auth\") pod \"router-default-5444994796-4n6ch\" (UID: \"333657b1-ebc6-4900-93eb-7762fd0eeaac\") " pod="openshift-ingress/router-default-5444994796-4n6ch" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.001401 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5c067093-6c7c-47fb-bcc6-d50bba65fe78-ca-trust-extracted\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.001440 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/90596a9c-3db0-47e4-a002-a97cd73f2ab9-console-oauth-config\") pod \"console-f9d7485db-vxkdk\" (UID: \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\") " pod="openshift-console/console-f9d7485db-vxkdk" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.002004 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5c067093-6c7c-47fb-bcc6-d50bba65fe78-registry-tls\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.002602 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c2a37542-d13b-431e-a375-69e3fc2e90eb-audit-dir\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.002813 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c067093-6c7c-47fb-bcc6-d50bba65fe78-trusted-ca\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.003216 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46691bb3-2fdb-402e-a030-4855bfd6684a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-d96md\" (UID: \"46691bb3-2fdb-402e-a030-4855bfd6684a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d96md" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.003535 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9602d005-3eaf-4e35-a19b-a406036cc295-trusted-ca\") pod \"ingress-operator-5b745b69d9-hgzzw\" (UID: \"9602d005-3eaf-4e35-a19b-a406036cc295\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hgzzw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.003638 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb9fa9e7-f243-4240-b739-babed8be646f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4blj6\" (UID: \"bb9fa9e7-f243-4240-b739-babed8be646f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4blj6" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.003641 4767 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2a37542-d13b-431e-a375-69e3fc2e90eb-serving-cert\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.003914 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb9fa9e7-f243-4240-b739-babed8be646f-service-ca-bundle\") pod \"authentication-operator-69f744f599-4blj6\" (UID: \"bb9fa9e7-f243-4240-b739-babed8be646f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4blj6" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.004075 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac80ca44-c0df-4f24-8177-5dc9cd10ea4f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cd7g2\" (UID: \"ac80ca44-c0df-4f24-8177-5dc9cd10ea4f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cd7g2" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.004121 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb9fa9e7-f243-4240-b739-babed8be646f-config\") pod \"authentication-operator-69f744f599-4blj6\" (UID: \"bb9fa9e7-f243-4240-b739-babed8be646f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4blj6" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.005295 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5c067093-6c7c-47fb-bcc6-d50bba65fe78-registry-certificates\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.005400 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/90596a9c-3db0-47e4-a002-a97cd73f2ab9-console-config\") pod \"console-f9d7485db-vxkdk\" (UID: \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\") " pod="openshift-console/console-f9d7485db-vxkdk" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.005818 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb9fa9e7-f243-4240-b739-babed8be646f-serving-cert\") pod \"authentication-operator-69f744f599-4blj6\" (UID: \"bb9fa9e7-f243-4240-b739-babed8be646f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4blj6" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.005841 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-client-ca\") pod \"controller-manager-879f6c89f-7m254\" (UID: \"9d9edf4c-6df3-484c-9bb7-a344d8147aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.005893 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/401b07cc-e3c3-4d71-9c55-c30f78a0335c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-k4jsr\" (UID: 
\"401b07cc-e3c3-4d71-9c55-c30f78a0335c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4jsr" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.006451 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/90596a9c-3db0-47e4-a002-a97cd73f2ab9-oauth-serving-cert\") pod \"console-f9d7485db-vxkdk\" (UID: \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\") " pod="openshift-console/console-f9d7485db-vxkdk" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.006553 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90596a9c-3db0-47e4-a002-a97cd73f2ab9-trusted-ca-bundle\") pod \"console-f9d7485db-vxkdk\" (UID: \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\") " pod="openshift-console/console-f9d7485db-vxkdk" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.006728 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-serving-cert\") pod \"controller-manager-879f6c89f-7m254\" (UID: \"9d9edf4c-6df3-484c-9bb7-a344d8147aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.006835 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a6bb2ba9-5a6a-438b-960e-05170e0928a8-auth-proxy-config\") pod \"machine-approver-56656f9798-f9mz7\" (UID: \"a6bb2ba9-5a6a-438b-960e-05170e0928a8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f9mz7" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.006869 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/90596a9c-3db0-47e4-a002-a97cd73f2ab9-console-serving-cert\") pod \"console-f9d7485db-vxkdk\" (UID: \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\") " pod="openshift-console/console-f9d7485db-vxkdk" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.006967 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3587430f-8bc8-4625-b262-e1d6f1c8454b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-8mtbc\" (UID: \"3587430f-8bc8-4625-b262-e1d6f1c8454b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mtbc" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.007396 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34c3a00d-6b69-4790-ba95-29ae01dd296f-config\") pod \"route-controller-manager-6576b87f9c-t67t2\" (UID: \"34c3a00d-6b69-4790-ba95-29ae01dd296f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.007844 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9602d005-3eaf-4e35-a19b-a406036cc295-metrics-tls\") pod \"ingress-operator-5b745b69d9-hgzzw\" (UID: \"9602d005-3eaf-4e35-a19b-a406036cc295\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hgzzw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.008639 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/c118259f-65cb-437d-abda-b69562018d38-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4vdpc\" (UID: \"c118259f-65cb-437d-abda-b69562018d38\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4vdpc" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.036566 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtnfs\" (UniqueName: \"kubernetes.io/projected/ffc4f39e-e317-408b-8031-5cf9b9bb20cf-kube-api-access-wtnfs\") pod \"machine-config-controller-84d6567774-6l2vm\" (UID: \"ffc4f39e-e317-408b-8031-5cf9b9bb20cf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6l2vm" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.051821 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j28q\" (UniqueName: \"kubernetes.io/projected/5c067093-6c7c-47fb-bcc6-d50bba65fe78-kube-api-access-2j28q\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.059047 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3587430f-8bc8-4625-b262-e1d6f1c8454b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-8mtbc\" (UID: \"3587430f-8bc8-4625-b262-e1d6f1c8454b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mtbc" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.059427 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7m254\" (UID: \"9d9edf4c-6df3-484c-9bb7-a344d8147aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.063441 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/333657b1-ebc6-4900-93eb-7762fd0eeaac-metrics-certs\") pod \"router-default-5444994796-4n6ch\" (UID: \"333657b1-ebc6-4900-93eb-7762fd0eeaac\") " pod="openshift-ingress/router-default-5444994796-4n6ch" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.063446 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/10023d91-2be9-4ad9-a801-ef782f263aca-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-sq4g8\" (UID: \"10023d91-2be9-4ad9-a801-ef782f263aca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sq4g8" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.063546 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/333657b1-ebc6-4900-93eb-7762fd0eeaac-default-certificate\") pod \"router-default-5444994796-4n6ch\" (UID: \"333657b1-ebc6-4900-93eb-7762fd0eeaac\") " pod="openshift-ingress/router-default-5444994796-4n6ch" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.065132 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a6bb2ba9-5a6a-438b-960e-05170e0928a8-machine-approver-tls\") pod \"machine-approver-56656f9798-f9mz7\" (UID: 
\"a6bb2ba9-5a6a-438b-960e-05170e0928a8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f9mz7" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.065890 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/10023d91-2be9-4ad9-a801-ef782f263aca-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-sq4g8\" (UID: \"10023d91-2be9-4ad9-a801-ef782f263aca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sq4g8" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.076964 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkm4f\" (UniqueName: \"kubernetes.io/projected/90596a9c-3db0-47e4-a002-a97cd73f2ab9-kube-api-access-zkm4f\") pod \"console-f9d7485db-vxkdk\" (UID: \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\") " pod="openshift-console/console-f9d7485db-vxkdk" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.081829 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-bk226" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.091715 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w65lq\" (UniqueName: \"kubernetes.io/projected/a6bb2ba9-5a6a-438b-960e-05170e0928a8-kube-api-access-w65lq\") pod \"machine-approver-56656f9798-f9mz7\" (UID: \"a6bb2ba9-5a6a-438b-960e-05170e0928a8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f9mz7" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.097822 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.098133 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e82f107f-9b85-4fdd-911d-ca674a002dea-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4qvr6\" (UID: \"e82f107f-9b85-4fdd-911d-ca674a002dea\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qvr6" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.098198 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/762f91d9-714d-4ba5-8c0c-f64498897186-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-gfdql\" (UID: \"762f91d9-714d-4ba5-8c0c-f64498897186\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gfdql" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.098255 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8299442b-4dd3-4520-9e47-d461d0538647-images\") pod \"machine-config-operator-74547568cd-mbflz\" (UID: \"8299442b-4dd3-4520-9e47-d461d0538647\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbflz" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.098305 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sj4p\" (UniqueName: 
\"kubernetes.io/projected/8299442b-4dd3-4520-9e47-d461d0538647-kube-api-access-6sj4p\") pod \"machine-config-operator-74547568cd-mbflz\" (UID: \"8299442b-4dd3-4520-9e47-d461d0538647\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbflz" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.098323 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6283b57b-899c-4d3d-b1a4-531a683d3853-mountpoint-dir\") pod \"csi-hostpathplugin-5pb8t\" (UID: \"6283b57b-899c-4d3d-b1a4-531a683d3853\") " pod="hostpath-provisioner/csi-hostpathplugin-5pb8t" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.098350 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8299442b-4dd3-4520-9e47-d461d0538647-proxy-tls\") pod \"machine-config-operator-74547568cd-mbflz\" (UID: \"8299442b-4dd3-4520-9e47-d461d0538647\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbflz" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.098383 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48m9r\" (UniqueName: \"kubernetes.io/projected/b6418788-50b4-4982-bde2-dc7acd6728ed-kube-api-access-48m9r\") pod \"packageserver-d55dfcdfc-9tjmr\" (UID: \"b6418788-50b4-4982-bde2-dc7acd6728ed\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9tjmr" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.098410 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plp4r\" (UniqueName: \"kubernetes.io/projected/6283b57b-899c-4d3d-b1a4-531a683d3853-kube-api-access-plp4r\") pod \"csi-hostpathplugin-5pb8t\" (UID: \"6283b57b-899c-4d3d-b1a4-531a683d3853\") " pod="hostpath-provisioner/csi-hostpathplugin-5pb8t" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.098428 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvdk8\" (UniqueName: \"kubernetes.io/projected/788715b0-b06a-4f34-afb4-443a4c8ff7b1-kube-api-access-pvdk8\") pod \"service-ca-9c57cc56f-lcrxj\" (UID: \"788715b0-b06a-4f34-afb4-443a4c8ff7b1\") " pod="openshift-service-ca/service-ca-9c57cc56f-lcrxj" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.098445 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/331bbbbd-b003-4190-b8a6-149cc2b81b39-serving-cert\") pod \"apiserver-7bbb656c7d-sbq5r\" (UID: \"331bbbbd-b003-4190-b8a6-149cc2b81b39\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.098481 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.098499 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2a405d09-41d7-423a-a5d0-5413839ee40b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vqjkg\" (UID: \"2a405d09-41d7-423a-a5d0-5413839ee40b\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqjkg" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.098516 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0ea03516-b574-4e25-8f8f-b45c358b5295-srv-cert\") pod \"catalog-operator-68c6474976-v2v2x\" (UID: \"0ea03516-b574-4e25-8f8f-b45c358b5295\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v2v2x" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.098551 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd7217ad-af23-4d91-bc2d-8d54a9e5580f-serving-cert\") pod \"service-ca-operator-777779d784-cml7v\" (UID: \"fd7217ad-af23-4d91-bc2d-8d54a9e5580f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cml7v" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.098590 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3580a3b5-6640-41c2-b61f-863c299c59c6-metrics-tls\") pod \"dns-default-px962\" (UID: \"3580a3b5-6640-41c2-b61f-863c299c59c6\") " pod="openshift-dns/dns-default-px962" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.098607 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df3e72cd-0745-4a8e-b3b5-25d23bccaa1c-config-volume\") pod \"collect-profiles-29492145-4vjsw\" (UID: \"df3e72cd-0745-4a8e-b3b5-25d23bccaa1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-4vjsw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.099364 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9bc30087-3b0d-441b-b384-853b7e1003ad-audit-policies\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc kubenswrapper[4767]: E0127 15:52:01.099459 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:01.599434517 +0000 UTC m=+143.988452040 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.099482 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/788715b0-b06a-4f34-afb4-443a4c8ff7b1-signing-cabundle\") pod \"service-ca-9c57cc56f-lcrxj\" (UID: \"788715b0-b06a-4f34-afb4-443a4c8ff7b1\") " pod="openshift-service-ca/service-ca-9c57cc56f-lcrxj" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.099501 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b6418788-50b4-4982-bde2-dc7acd6728ed-apiservice-cert\") pod \"packageserver-d55dfcdfc-9tjmr\" (UID: \"b6418788-50b4-4982-bde2-dc7acd6728ed\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9tjmr" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.099517 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3580a3b5-6640-41c2-b61f-863c299c59c6-config-volume\") pod \"dns-default-px962\" (UID: \"3580a3b5-6640-41c2-b61f-863c299c59c6\") " pod="openshift-dns/dns-default-px962" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.099547 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbs6x\" (UniqueName: \"kubernetes.io/projected/e82f107f-9b85-4fdd-911d-ca674a002dea-kube-api-access-dbs6x\") pod \"kube-storage-version-migrator-operator-b67b599dd-4qvr6\" (UID: \"e82f107f-9b85-4fdd-911d-ca674a002dea\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qvr6" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.099579 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjnsc\" (UniqueName: \"kubernetes.io/projected/2a405d09-41d7-423a-a5d0-5413839ee40b-kube-api-access-mjnsc\") pod \"olm-operator-6b444d44fb-vqjkg\" (UID: \"2a405d09-41d7-423a-a5d0-5413839ee40b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqjkg" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.099611 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/331bbbbd-b003-4190-b8a6-149cc2b81b39-audit-dir\") pod \"apiserver-7bbb656c7d-sbq5r\" (UID: \"331bbbbd-b003-4190-b8a6-149cc2b81b39\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.099648 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/331bbbbd-b003-4190-b8a6-149cc2b81b39-encryption-config\") pod \"apiserver-7bbb656c7d-sbq5r\" (UID: \"331bbbbd-b003-4190-b8a6-149cc2b81b39\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.099675 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/b6418788-50b4-4982-bde2-dc7acd6728ed-tmpfs\") pod \"packageserver-d55dfcdfc-9tjmr\" (UID: \"b6418788-50b4-4982-bde2-dc7acd6728ed\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9tjmr" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.099716 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/21e2fdb8-e486-4a69-b9d4-00c1ce090296-certs\") pod \"machine-config-server-8tbrf\" (UID: \"21e2fdb8-e486-4a69-b9d4-00c1ce090296\") " pod="openshift-machine-config-operator/machine-config-server-8tbrf" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.099738 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jhp2\" (UniqueName: \"kubernetes.io/projected/fd7217ad-af23-4d91-bc2d-8d54a9e5580f-kube-api-access-7jhp2\") pod \"service-ca-operator-777779d784-cml7v\" (UID: \"fd7217ad-af23-4d91-bc2d-8d54a9e5580f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cml7v" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.099774 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dea7593b-32bb-4d48-b47a-2cf9aa0d4185-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7gdqg\" (UID: \"dea7593b-32bb-4d48-b47a-2cf9aa0d4185\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7gdqg" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.099807 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48f59\" (UniqueName: \"kubernetes.io/projected/aa6ea9de-9f71-4d6e-8304-536ccfaeaec0-kube-api-access-48f59\") pod \"marketplace-operator-79b997595-cbltv\" (UID: \"aa6ea9de-9f71-4d6e-8304-536ccfaeaec0\") " pod="openshift-marketplace/marketplace-operator-79b997595-cbltv" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.100126 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6283b57b-899c-4d3d-b1a4-531a683d3853-mountpoint-dir\") pod \"csi-hostpathplugin-5pb8t\" (UID: \"6283b57b-899c-4d3d-b1a4-531a683d3853\") " pod="hostpath-provisioner/csi-hostpathplugin-5pb8t" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.099825 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.100437 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrwbk\" (UniqueName: \"kubernetes.io/projected/df3e72cd-0745-4a8e-b3b5-25d23bccaa1c-kube-api-access-mrwbk\") pod \"collect-profiles-29492145-4vjsw\" (UID: \"df3e72cd-0745-4a8e-b3b5-25d23bccaa1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-4vjsw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.100459 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.100476 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtcks\" (UniqueName: \"kubernetes.io/projected/9bc30087-3b0d-441b-b384-853b7e1003ad-kube-api-access-jtcks\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.100505 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1b629b6-588e-44f8-9f64-613ba63f3313-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-czw9w\" (UID: \"e1b629b6-588e-44f8-9f64-613ba63f3313\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-czw9w" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.100525 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsxkn\" (UniqueName: \"kubernetes.io/projected/0ea03516-b574-4e25-8f8f-b45c358b5295-kube-api-access-qsxkn\") pod \"catalog-operator-68c6474976-v2v2x\" (UID: \"0ea03516-b574-4e25-8f8f-b45c358b5295\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v2v2x" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.101134 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/788715b0-b06a-4f34-afb4-443a4c8ff7b1-signing-cabundle\") pod \"service-ca-9c57cc56f-lcrxj\" (UID: \"788715b0-b06a-4f34-afb4-443a4c8ff7b1\") " pod="openshift-service-ca/service-ca-9c57cc56f-lcrxj" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.101162 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8299442b-4dd3-4520-9e47-d461d0538647-images\") pod \"machine-config-operator-74547568cd-mbflz\" (UID: \"8299442b-4dd3-4520-9e47-d461d0538647\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbflz" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.101231 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9bc30087-3b0d-441b-b384-853b7e1003ad-audit-policies\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.101311 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/331bbbbd-b003-4190-b8a6-149cc2b81b39-audit-dir\") pod \"apiserver-7bbb656c7d-sbq5r\" (UID: \"331bbbbd-b003-4190-b8a6-149cc2b81b39\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.101836 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b6418788-50b4-4982-bde2-dc7acd6728ed-tmpfs\") pod \"packageserver-d55dfcdfc-9tjmr\" (UID: \"b6418788-50b4-4982-bde2-dc7acd6728ed\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9tjmr" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.101942 4767 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd7217ad-af23-4d91-bc2d-8d54a9e5580f-config\") pod \"service-ca-operator-777779d784-cml7v\" (UID: \"fd7217ad-af23-4d91-bc2d-8d54a9e5580f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cml7v" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102007 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102046 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2q29\" (UniqueName: \"kubernetes.io/projected/fd479a9b-8563-433e-aae2-ab0856594b3f-kube-api-access-c2q29\") pod \"migrator-59844c95c7-vnr5s\" (UID: \"fd479a9b-8563-433e-aae2-ab0856594b3f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vnr5s" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102078 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102098 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2a405d09-41d7-423a-a5d0-5413839ee40b-srv-cert\") pod \"olm-operator-6b444d44fb-vqjkg\" (UID: \"2a405d09-41d7-423a-a5d0-5413839ee40b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqjkg" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102118 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p77hn\" (UniqueName: \"kubernetes.io/projected/3580a3b5-6640-41c2-b61f-863c299c59c6-kube-api-access-p77hn\") pod \"dns-default-px962\" (UID: \"3580a3b5-6640-41c2-b61f-863c299c59c6\") " pod="openshift-dns/dns-default-px962" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102137 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxfhk\" (UniqueName: \"kubernetes.io/projected/bb803c2c-ff0b-4f4a-a566-d0ca1957ce56-kube-api-access-fxfhk\") pod \"package-server-manager-789f6589d5-j6mgl\" (UID: \"bb803c2c-ff0b-4f4a-a566-d0ca1957ce56\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6mgl" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102156 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79f27\" (UniqueName: \"kubernetes.io/projected/331bbbbd-b003-4190-b8a6-149cc2b81b39-kube-api-access-79f27\") pod \"apiserver-7bbb656c7d-sbq5r\" (UID: \"331bbbbd-b003-4190-b8a6-149cc2b81b39\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102173 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/331bbbbd-b003-4190-b8a6-149cc2b81b39-audit-policies\") pod 
\"apiserver-7bbb656c7d-sbq5r\" (UID: \"331bbbbd-b003-4190-b8a6-149cc2b81b39\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102232 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wp5r\" (UniqueName: \"kubernetes.io/projected/df1defe0-ab80-4262-a444-23043c0a5ff0-kube-api-access-6wp5r\") pod \"ingress-canary-vxtlv\" (UID: \"df1defe0-ab80-4262-a444-23043c0a5ff0\") " pod="openshift-ingress-canary/ingress-canary-vxtlv" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102259 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102274 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6283b57b-899c-4d3d-b1a4-531a683d3853-socket-dir\") pod \"csi-hostpathplugin-5pb8t\" (UID: \"6283b57b-899c-4d3d-b1a4-531a683d3853\") " pod="hostpath-provisioner/csi-hostpathplugin-5pb8t" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102289 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6283b57b-899c-4d3d-b1a4-531a683d3853-registration-dir\") pod \"csi-hostpathplugin-5pb8t\" (UID: \"6283b57b-899c-4d3d-b1a4-531a683d3853\") " pod="hostpath-provisioner/csi-hostpathplugin-5pb8t" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102320 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5thn\" (UniqueName: \"kubernetes.io/projected/21e2fdb8-e486-4a69-b9d4-00c1ce090296-kube-api-access-h5thn\") pod \"machine-config-server-8tbrf\" (UID: \"21e2fdb8-e486-4a69-b9d4-00c1ce090296\") " pod="openshift-machine-config-operator/machine-config-server-8tbrf" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102344 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8299442b-4dd3-4520-9e47-d461d0538647-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mbflz\" (UID: \"8299442b-4dd3-4520-9e47-d461d0538647\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbflz" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102359 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/331bbbbd-b003-4190-b8a6-149cc2b81b39-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-sbq5r\" (UID: \"331bbbbd-b003-4190-b8a6-149cc2b81b39\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102390 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102408 
4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e82f107f-9b85-4fdd-911d-ca674a002dea-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4qvr6\" (UID: \"e82f107f-9b85-4fdd-911d-ca674a002dea\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qvr6" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102426 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102443 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1b629b6-588e-44f8-9f64-613ba63f3313-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-czw9w\" (UID: \"e1b629b6-588e-44f8-9f64-613ba63f3313\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-czw9w" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102466 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102491 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0ea03516-b574-4e25-8f8f-b45c358b5295-profile-collector-cert\") pod \"catalog-operator-68c6474976-v2v2x\" (UID: \"0ea03516-b574-4e25-8f8f-b45c358b5295\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v2v2x" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102508 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1b629b6-588e-44f8-9f64-613ba63f3313-config\") pod \"kube-controller-manager-operator-78b949d7b-czw9w\" (UID: \"e1b629b6-588e-44f8-9f64-613ba63f3313\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-czw9w" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102526 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/21e2fdb8-e486-4a69-b9d4-00c1ce090296-node-bootstrap-token\") pod \"machine-config-server-8tbrf\" (UID: \"21e2fdb8-e486-4a69-b9d4-00c1ce090296\") " pod="openshift-machine-config-operator/machine-config-server-8tbrf" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102546 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc 
kubenswrapper[4767]: I0127 15:52:01.102562 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df3e72cd-0745-4a8e-b3b5-25d23bccaa1c-secret-volume\") pod \"collect-profiles-29492145-4vjsw\" (UID: \"df3e72cd-0745-4a8e-b3b5-25d23bccaa1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-4vjsw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102580 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102604 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6283b57b-899c-4d3d-b1a4-531a683d3853-csi-data-dir\") pod \"csi-hostpathplugin-5pb8t\" (UID: \"6283b57b-899c-4d3d-b1a4-531a683d3853\") " pod="hostpath-provisioner/csi-hostpathplugin-5pb8t" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102623 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/331bbbbd-b003-4190-b8a6-149cc2b81b39-etcd-client\") pod \"apiserver-7bbb656c7d-sbq5r\" (UID: \"331bbbbd-b003-4190-b8a6-149cc2b81b39\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102643 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aa6ea9de-9f71-4d6e-8304-536ccfaeaec0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cbltv\" (UID: \"aa6ea9de-9f71-4d6e-8304-536ccfaeaec0\") " pod="openshift-marketplace/marketplace-operator-79b997595-cbltv" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102659 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b6418788-50b4-4982-bde2-dc7acd6728ed-webhook-cert\") pod \"packageserver-d55dfcdfc-9tjmr\" (UID: \"b6418788-50b4-4982-bde2-dc7acd6728ed\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9tjmr" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102677 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnf8m\" (UniqueName: \"kubernetes.io/projected/dea7593b-32bb-4d48-b47a-2cf9aa0d4185-kube-api-access-gnf8m\") pod \"control-plane-machine-set-operator-78cbb6b69f-7gdqg\" (UID: \"dea7593b-32bb-4d48-b47a-2cf9aa0d4185\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7gdqg" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102700 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb803c2c-ff0b-4f4a-a566-d0ca1957ce56-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-j6mgl\" (UID: \"bb803c2c-ff0b-4f4a-a566-d0ca1957ce56\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6mgl" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102717 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6283b57b-899c-4d3d-b1a4-531a683d3853-plugins-dir\") pod \"csi-hostpathplugin-5pb8t\" (UID: \"6283b57b-899c-4d3d-b1a4-531a683d3853\") " pod="hostpath-provisioner/csi-hostpathplugin-5pb8t" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102737 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/788715b0-b06a-4f34-afb4-443a4c8ff7b1-signing-key\") pod \"service-ca-9c57cc56f-lcrxj\" (UID: \"788715b0-b06a-4f34-afb4-443a4c8ff7b1\") " pod="openshift-service-ca/service-ca-9c57cc56f-lcrxj" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102752 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/331bbbbd-b003-4190-b8a6-149cc2b81b39-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-sbq5r\" (UID: \"331bbbbd-b003-4190-b8a6-149cc2b81b39\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102777 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9bc30087-3b0d-441b-b384-853b7e1003ad-audit-dir\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102799 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102814 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa6ea9de-9f71-4d6e-8304-536ccfaeaec0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cbltv\" (UID: \"aa6ea9de-9f71-4d6e-8304-536ccfaeaec0\") " pod="openshift-marketplace/marketplace-operator-79b997595-cbltv" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102832 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnmk5\" (UniqueName: \"kubernetes.io/projected/762f91d9-714d-4ba5-8c0c-f64498897186-kube-api-access-hnmk5\") pod \"multus-admission-controller-857f4d67dd-gfdql\" (UID: \"762f91d9-714d-4ba5-8c0c-f64498897186\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gfdql" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102848 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df1defe0-ab80-4262-a444-23043c0a5ff0-cert\") pod \"ingress-canary-vxtlv\" (UID: \"df1defe0-ab80-4262-a444-23043c0a5ff0\") " pod="openshift-ingress-canary/ingress-canary-vxtlv" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.102850 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3580a3b5-6640-41c2-b61f-863c299c59c6-config-volume\") pod \"dns-default-px962\" (UID: \"3580a3b5-6640-41c2-b61f-863c299c59c6\") " pod="openshift-dns/dns-default-px962" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 
15:52:01.102908 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.103534 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6283b57b-899c-4d3d-b1a4-531a683d3853-registration-dir\") pod \"csi-hostpathplugin-5pb8t\" (UID: \"6283b57b-899c-4d3d-b1a4-531a683d3853\") " pod="hostpath-provisioner/csi-hostpathplugin-5pb8t" Jan 27 15:52:01 crc kubenswrapper[4767]: E0127 15:52:01.103571 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:01.603541127 +0000 UTC m=+143.992558850 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.103921 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.104568 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df3e72cd-0745-4a8e-b3b5-25d23bccaa1c-config-volume\") pod \"collect-profiles-29492145-4vjsw\" (UID: \"df3e72cd-0745-4a8e-b3b5-25d23bccaa1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-4vjsw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.104736 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd7217ad-af23-4d91-bc2d-8d54a9e5580f-config\") pod \"service-ca-operator-777779d784-cml7v\" (UID: \"fd7217ad-af23-4d91-bc2d-8d54a9e5580f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cml7v" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.105014 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/331bbbbd-b003-4190-b8a6-149cc2b81b39-serving-cert\") pod \"apiserver-7bbb656c7d-sbq5r\" (UID: \"331bbbbd-b003-4190-b8a6-149cc2b81b39\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.105103 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b6418788-50b4-4982-bde2-dc7acd6728ed-apiservice-cert\") pod \"packageserver-d55dfcdfc-9tjmr\" (UID: 
\"b6418788-50b4-4982-bde2-dc7acd6728ed\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9tjmr" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.105726 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/331bbbbd-b003-4190-b8a6-149cc2b81b39-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-sbq5r\" (UID: \"331bbbbd-b003-4190-b8a6-149cc2b81b39\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.106504 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1b629b6-588e-44f8-9f64-613ba63f3313-config\") pod \"kube-controller-manager-operator-78b949d7b-czw9w\" (UID: \"e1b629b6-588e-44f8-9f64-613ba63f3313\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-czw9w" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.106610 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6283b57b-899c-4d3d-b1a4-531a683d3853-csi-data-dir\") pod \"csi-hostpathplugin-5pb8t\" (UID: \"6283b57b-899c-4d3d-b1a4-531a683d3853\") " pod="hostpath-provisioner/csi-hostpathplugin-5pb8t" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.106656 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6283b57b-899c-4d3d-b1a4-531a683d3853-plugins-dir\") pod \"csi-hostpathplugin-5pb8t\" (UID: \"6283b57b-899c-4d3d-b1a4-531a683d3853\") " pod="hostpath-provisioner/csi-hostpathplugin-5pb8t" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.107136 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/331bbbbd-b003-4190-b8a6-149cc2b81b39-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-sbq5r\" (UID: \"331bbbbd-b003-4190-b8a6-149cc2b81b39\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.107222 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9bc30087-3b0d-441b-b384-853b7e1003ad-audit-dir\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.108222 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e82f107f-9b85-4fdd-911d-ca674a002dea-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4qvr6\" (UID: \"e82f107f-9b85-4fdd-911d-ca674a002dea\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qvr6" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.108893 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8299442b-4dd3-4520-9e47-d461d0538647-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mbflz\" (UID: \"8299442b-4dd3-4520-9e47-d461d0538647\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbflz" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.109931 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb803c2c-ff0b-4f4a-a566-d0ca1957ce56-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-j6mgl\" (UID: \"bb803c2c-ff0b-4f4a-a566-d0ca1957ce56\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6mgl" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.110396 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.110936 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df3e72cd-0745-4a8e-b3b5-25d23bccaa1c-secret-volume\") pod \"collect-profiles-29492145-4vjsw\" (UID: \"df3e72cd-0745-4a8e-b3b5-25d23bccaa1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-4vjsw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.110934 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dea7593b-32bb-4d48-b47a-2cf9aa0d4185-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7gdqg\" (UID: \"dea7593b-32bb-4d48-b47a-2cf9aa0d4185\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7gdqg" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.111224 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa6ea9de-9f71-4d6e-8304-536ccfaeaec0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cbltv\" (UID: \"aa6ea9de-9f71-4d6e-8304-536ccfaeaec0\") " pod="openshift-marketplace/marketplace-operator-79b997595-cbltv" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.111622 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b6418788-50b4-4982-bde2-dc7acd6728ed-webhook-cert\") pod \"packageserver-d55dfcdfc-9tjmr\" (UID: \"b6418788-50b4-4982-bde2-dc7acd6728ed\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9tjmr" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.111638 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2a405d09-41d7-423a-a5d0-5413839ee40b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vqjkg\" (UID: \"2a405d09-41d7-423a-a5d0-5413839ee40b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqjkg" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.111621 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aa6ea9de-9f71-4d6e-8304-536ccfaeaec0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cbltv\" (UID: \"aa6ea9de-9f71-4d6e-8304-536ccfaeaec0\") " pod="openshift-marketplace/marketplace-operator-79b997595-cbltv" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.111691 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.111723 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6283b57b-899c-4d3d-b1a4-531a683d3853-socket-dir\") pod \"csi-hostpathplugin-5pb8t\" (UID: \"6283b57b-899c-4d3d-b1a4-531a683d3853\") " pod="hostpath-provisioner/csi-hostpathplugin-5pb8t" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.111839 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.111916 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/21e2fdb8-e486-4a69-b9d4-00c1ce090296-node-bootstrap-token\") pod \"machine-config-server-8tbrf\" (UID: \"21e2fdb8-e486-4a69-b9d4-00c1ce090296\") " pod="openshift-machine-config-operator/machine-config-server-8tbrf" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.112043 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.112266 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df1defe0-ab80-4262-a444-23043c0a5ff0-cert\") pod \"ingress-canary-vxtlv\" (UID: \"df1defe0-ab80-4262-a444-23043c0a5ff0\") " pod="openshift-ingress-canary/ingress-canary-vxtlv" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.112519 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/331bbbbd-b003-4190-b8a6-149cc2b81b39-etcd-client\") pod \"apiserver-7bbb656c7d-sbq5r\" (UID: \"331bbbbd-b003-4190-b8a6-149cc2b81b39\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.112618 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/331bbbbd-b003-4190-b8a6-149cc2b81b39-encryption-config\") pod \"apiserver-7bbb656c7d-sbq5r\" (UID: \"331bbbbd-b003-4190-b8a6-149cc2b81b39\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.112709 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/788715b0-b06a-4f34-afb4-443a4c8ff7b1-signing-key\") pod \"service-ca-9c57cc56f-lcrxj\" (UID: \"788715b0-b06a-4f34-afb4-443a4c8ff7b1\") " pod="openshift-service-ca/service-ca-9c57cc56f-lcrxj" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.113568 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.113785 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8299442b-4dd3-4520-9e47-d461d0538647-proxy-tls\") pod \"machine-config-operator-74547568cd-mbflz\" (UID: \"8299442b-4dd3-4520-9e47-d461d0538647\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbflz" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.113838 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd7217ad-af23-4d91-bc2d-8d54a9e5580f-serving-cert\") pod \"service-ca-operator-777779d784-cml7v\" (UID: \"fd7217ad-af23-4d91-bc2d-8d54a9e5580f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cml7v" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.113882 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.114034 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0ea03516-b574-4e25-8f8f-b45c358b5295-srv-cert\") pod \"catalog-operator-68c6474976-v2v2x\" (UID: \"0ea03516-b574-4e25-8f8f-b45c358b5295\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v2v2x" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.114065 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/21e2fdb8-e486-4a69-b9d4-00c1ce090296-certs\") pod \"machine-config-server-8tbrf\" (UID: \"21e2fdb8-e486-4a69-b9d4-00c1ce090296\") " pod="openshift-machine-config-operator/machine-config-server-8tbrf" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.114293 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/762f91d9-714d-4ba5-8c0c-f64498897186-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-gfdql\" (UID: \"762f91d9-714d-4ba5-8c0c-f64498897186\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gfdql" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.114421 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e82f107f-9b85-4fdd-911d-ca674a002dea-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4qvr6\" (UID: \"e82f107f-9b85-4fdd-911d-ca674a002dea\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qvr6" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.114465 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-user-template-login\") pod 
\"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.114827 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.114987 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1b629b6-588e-44f8-9f64-613ba63f3313-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-czw9w\" (UID: \"e1b629b6-588e-44f8-9f64-613ba63f3313\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-czw9w" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.115607 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2a405d09-41d7-423a-a5d0-5413839ee40b-srv-cert\") pod \"olm-operator-6b444d44fb-vqjkg\" (UID: \"2a405d09-41d7-423a-a5d0-5413839ee40b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqjkg" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.116399 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.116393 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0ea03516-b574-4e25-8f8f-b45c358b5295-profile-collector-cert\") pod \"catalog-operator-68c6474976-v2v2x\" (UID: \"0ea03516-b574-4e25-8f8f-b45c358b5295\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v2v2x" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.116608 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpsn2\" (UniqueName: \"kubernetes.io/projected/10023d91-2be9-4ad9-a801-ef782f263aca-kube-api-access-kpsn2\") pod \"cluster-image-registry-operator-dc59b4c8b-sq4g8\" (UID: \"10023d91-2be9-4ad9-a801-ef782f263aca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sq4g8" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.139125 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6l2vm" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.142968 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46691bb3-2fdb-402e-a030-4855bfd6684a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-d96md\" (UID: \"46691bb3-2fdb-402e-a030-4855bfd6684a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d96md" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.155245 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgpzk\" (UniqueName: \"kubernetes.io/projected/f5f2dc8d-8525-4033-bd8d-3bc73fcbf41d-kube-api-access-vgpzk\") pod \"console-operator-58897d9998-64xhv\" (UID: \"f5f2dc8d-8525-4033-bd8d-3bc73fcbf41d\") " pod="openshift-console-operator/console-operator-58897d9998-64xhv" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.175782 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f9mz7" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.177678 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzf7f\" (UniqueName: \"kubernetes.io/projected/9602d005-3eaf-4e35-a19b-a406036cc295-kube-api-access-vzf7f\") pod \"ingress-operator-5b745b69d9-hgzzw\" (UID: \"9602d005-3eaf-4e35-a19b-a406036cc295\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hgzzw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.180490 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/331bbbbd-b003-4190-b8a6-149cc2b81b39-audit-policies\") pod \"apiserver-7bbb656c7d-sbq5r\" (UID: \"331bbbbd-b003-4190-b8a6-149cc2b81b39\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.182973 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3580a3b5-6640-41c2-b61f-863c299c59c6-metrics-tls\") pod \"dns-default-px962\" (UID: \"3580a3b5-6640-41c2-b61f-863c299c59c6\") " pod="openshift-dns/dns-default-px962" Jan 27 15:52:01 crc kubenswrapper[4767]: W0127 15:52:01.192308 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6bb2ba9_5a6a_438b_960e_05170e0928a8.slice/crio-6083d13bf631c455bbd01ff9e28560fe24c9606c21ea7bd6de407e466e51c3d9 WatchSource:0}: Error finding container 6083d13bf631c455bbd01ff9e28560fe24c9606c21ea7bd6de407e466e51c3d9: Status 404 returned error can't find the container with id 6083d13bf631c455bbd01ff9e28560fe24c9606c21ea7bd6de407e466e51c3d9 Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.194657 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5c067093-6c7c-47fb-bcc6-d50bba65fe78-bound-sa-token\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.204172 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:01 crc kubenswrapper[4767]: E0127 15:52:01.204302 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:01.704280608 +0000 UTC m=+144.093298131 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.204600 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:01 crc kubenswrapper[4767]: E0127 15:52:01.205091 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:01.705079542 +0000 UTC m=+144.094097065 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.212851 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nstw6\" (UniqueName: \"kubernetes.io/projected/4d7e5c51-63bd-46b6-adef-459b93b18142-kube-api-access-nstw6\") pod \"openshift-config-operator-7777fb866f-l4l7n\" (UID: \"4d7e5c51-63bd-46b6-adef-459b93b18142\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l4l7n" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.235021 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsj5x\" (UniqueName: \"kubernetes.io/projected/25e39933-042b-46a8-9e96-19acb0944e08-kube-api-access-vsj5x\") pod \"downloads-7954f5f757-ksqxd\" (UID: \"25e39933-042b-46a8-9e96-19acb0944e08\") " pod="openshift-console/downloads-7954f5f757-ksqxd" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.255535 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czwwg\" (UniqueName: \"kubernetes.io/projected/34c3a00d-6b69-4790-ba95-29ae01dd296f-kube-api-access-czwwg\") pod \"route-controller-manager-6576b87f9c-t67t2\" (UID: \"34c3a00d-6b69-4790-ba95-29ae01dd296f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.281410 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxrct\" (UniqueName: \"kubernetes.io/projected/c2a37542-d13b-431e-a375-69e3fc2e90eb-kube-api-access-dxrct\") pod \"apiserver-76f77b778f-d7nhv\" (UID: \"c2a37542-d13b-431e-a375-69e3fc2e90eb\") " pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.283225 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.292902 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l4l7n" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.299496 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/10023d91-2be9-4ad9-a801-ef782f263aca-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-sq4g8\" (UID: \"10023d91-2be9-4ad9-a801-ef782f263aca\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sq4g8" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.307362 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:01 crc kubenswrapper[4767]: E0127 15:52:01.307816 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:01.80778726 +0000 UTC m=+144.196804823 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.311290 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-ksqxd" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.314026 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac80ca44-c0df-4f24-8177-5dc9cd10ea4f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cd7g2\" (UID: \"ac80ca44-c0df-4f24-8177-5dc9cd10ea4f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cd7g2" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.324728 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.334969 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sq4g8" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.340664 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjj99\" (UniqueName: \"kubernetes.io/projected/56755333-86a4-4a45-b49a-c518575ad5f0-kube-api-access-wjj99\") pod \"machine-api-operator-5694c8668f-69lb2\" (UID: \"56755333-86a4-4a45-b49a-c518575ad5f0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-69lb2" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.357440 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-64xhv" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.358683 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bk226"] Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.374087 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-vxkdk" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.379536 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kfcc\" (UniqueName: \"kubernetes.io/projected/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-kube-api-access-4kfcc\") pod \"controller-manager-879f6c89f-7m254\" (UID: \"9d9edf4c-6df3-484c-9bb7-a344d8147aa6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.379747 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6l2vm"] Jan 27 15:52:01 crc kubenswrapper[4767]: W0127 15:52:01.391272 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffc4f39e_e317_408b_8031_5cf9b9bb20cf.slice/crio-da99a7611a67f3a5ed29a6dca7fbb5ec4178e148fca7692c4258d197d335a74f WatchSource:0}: Error finding container da99a7611a67f3a5ed29a6dca7fbb5ec4178e148fca7692c4258d197d335a74f: Status 404 returned error can't find the container with id da99a7611a67f3a5ed29a6dca7fbb5ec4178e148fca7692c4258d197d335a74f Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.394958 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp9db\" (UniqueName: \"kubernetes.io/projected/c118259f-65cb-437d-abda-b69562018d38-kube-api-access-cp9db\") pod \"cluster-samples-operator-665b6dd947-4vdpc\" (UID: \"c118259f-65cb-437d-abda-b69562018d38\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4vdpc" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.395074 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d96md" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.408030 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cd7g2" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.409090 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:01 crc kubenswrapper[4767]: E0127 15:52:01.409526 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:01.909511571 +0000 UTC m=+144.298529094 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.419463 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qn2r\" (UniqueName: \"kubernetes.io/projected/333657b1-ebc6-4900-93eb-7762fd0eeaac-kube-api-access-5qn2r\") pod \"router-default-5444994796-4n6ch\" (UID: \"333657b1-ebc6-4900-93eb-7762fd0eeaac\") " pod="openshift-ingress/router-default-5444994796-4n6ch" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.436694 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q659\" (UniqueName: \"kubernetes.io/projected/bb9fa9e7-f243-4240-b739-babed8be646f-kube-api-access-6q659\") pod \"authentication-operator-69f744f599-4blj6\" (UID: \"bb9fa9e7-f243-4240-b739-babed8be646f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4blj6" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.459829 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prmkd\" (UniqueName: \"kubernetes.io/projected/3587430f-8bc8-4625-b262-e1d6f1c8454b-kube-api-access-prmkd\") pod \"openshift-controller-manager-operator-756b6f6bc6-8mtbc\" (UID: \"3587430f-8bc8-4625-b262-e1d6f1c8454b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mtbc" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.478503 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d7nhv"] Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.479441 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54kkg\" (UniqueName: \"kubernetes.io/projected/401b07cc-e3c3-4d71-9c55-c30f78a0335c-kube-api-access-54kkg\") pod \"openshift-apiserver-operator-796bbdcf4f-k4jsr\" (UID: \"401b07cc-e3c3-4d71-9c55-c30f78a0335c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4jsr" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.503268 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9602d005-3eaf-4e35-a19b-a406036cc295-bound-sa-token\") pod \"ingress-operator-5b745b69d9-hgzzw\" (UID: \"9602d005-3eaf-4e35-a19b-a406036cc295\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hgzzw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.513804 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4vdpc" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.514422 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:01 crc kubenswrapper[4767]: E0127 15:52:01.515014 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:02.01499577 +0000 UTC m=+144.404013293 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.518793 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnq8k\" (UniqueName: \"kubernetes.io/projected/29ab3a2b-59d9-4e16-915f-f76e1d215929-kube-api-access-bnq8k\") pod \"dns-operator-744455d44c-fctcl\" (UID: \"29ab3a2b-59d9-4e16-915f-f76e1d215929\") " pod="openshift-dns-operator/dns-operator-744455d44c-fctcl" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.544397 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48m9r\" (UniqueName: \"kubernetes.io/projected/b6418788-50b4-4982-bde2-dc7acd6728ed-kube-api-access-48m9r\") pod \"packageserver-d55dfcdfc-9tjmr\" (UID: \"b6418788-50b4-4982-bde2-dc7acd6728ed\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9tjmr" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.555654 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sj4p\" (UniqueName: \"kubernetes.io/projected/8299442b-4dd3-4520-9e47-d461d0538647-kube-api-access-6sj4p\") pod \"machine-config-operator-74547568cd-mbflz\" (UID: \"8299442b-4dd3-4520-9e47-d461d0538647\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbflz" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.558286 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9tjmr" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.558490 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-69lb2" Jan 27 15:52:01 crc kubenswrapper[4767]: W0127 15:52:01.575869 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2a37542_d13b_431e_a375_69e3fc2e90eb.slice/crio-0cb8995b5c7832cc82f1a386e96265071758ef712084db3f7862e38f46f9e8da WatchSource:0}: Error finding container 0cb8995b5c7832cc82f1a386e96265071758ef712084db3f7862e38f46f9e8da: Status 404 returned error can't find the container with id 0cb8995b5c7832cc82f1a386e96265071758ef712084db3f7862e38f46f9e8da Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.576035 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.579441 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvdk8\" (UniqueName: \"kubernetes.io/projected/788715b0-b06a-4f34-afb4-443a4c8ff7b1-kube-api-access-pvdk8\") pod \"service-ca-9c57cc56f-lcrxj\" (UID: \"788715b0-b06a-4f34-afb4-443a4c8ff7b1\") " pod="openshift-service-ca/service-ca-9c57cc56f-lcrxj" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.603750 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sq4g8"] Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.603833 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4jsr" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.604393 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plp4r\" (UniqueName: \"kubernetes.io/projected/6283b57b-899c-4d3d-b1a4-531a683d3853-kube-api-access-plp4r\") pod \"csi-hostpathplugin-5pb8t\" (UID: \"6283b57b-899c-4d3d-b1a4-531a683d3853\") " pod="hostpath-provisioner/csi-hostpathplugin-5pb8t" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.610483 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-64xhv"] Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.619608 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:01 crc kubenswrapper[4767]: E0127 15:52:01.620090 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:02.120076228 +0000 UTC m=+144.509093751 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.624685 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5pb8t" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.626274 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjnsc\" (UniqueName: \"kubernetes.io/projected/2a405d09-41d7-423a-a5d0-5413839ee40b-kube-api-access-mjnsc\") pod \"olm-operator-6b444d44fb-vqjkg\" (UID: \"2a405d09-41d7-423a-a5d0-5413839ee40b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqjkg" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.638328 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbs6x\" (UniqueName: \"kubernetes.io/projected/e82f107f-9b85-4fdd-911d-ca674a002dea-kube-api-access-dbs6x\") pod \"kube-storage-version-migrator-operator-b67b599dd-4qvr6\" (UID: \"e82f107f-9b85-4fdd-911d-ca674a002dea\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qvr6" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.646353 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-4blj6" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.658116 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrwbk\" (UniqueName: \"kubernetes.io/projected/df3e72cd-0745-4a8e-b3b5-25d23bccaa1c-kube-api-access-mrwbk\") pod \"collect-profiles-29492145-4vjsw\" (UID: \"df3e72cd-0745-4a8e-b3b5-25d23bccaa1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-4vjsw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.662530 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2"] Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.665401 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mtbc" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.681067 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48f59\" (UniqueName: \"kubernetes.io/projected/aa6ea9de-9f71-4d6e-8304-536ccfaeaec0-kube-api-access-48f59\") pod \"marketplace-operator-79b997595-cbltv\" (UID: \"aa6ea9de-9f71-4d6e-8304-536ccfaeaec0\") " pod="openshift-marketplace/marketplace-operator-79b997595-cbltv" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.689230 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-4n6ch" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.704509 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2q29\" (UniqueName: \"kubernetes.io/projected/fd479a9b-8563-433e-aae2-ab0856594b3f-kube-api-access-c2q29\") pod \"migrator-59844c95c7-vnr5s\" (UID: \"fd479a9b-8563-433e-aae2-ab0856594b3f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vnr5s" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.717789 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jhp2\" (UniqueName: \"kubernetes.io/projected/fd7217ad-af23-4d91-bc2d-8d54a9e5580f-kube-api-access-7jhp2\") pod \"service-ca-operator-777779d784-cml7v\" (UID: \"fd7217ad-af23-4d91-bc2d-8d54a9e5580f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cml7v" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.719081 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-fctcl" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.721091 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:01 crc kubenswrapper[4767]: E0127 15:52:01.722635 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:02.222614902 +0000 UTC m=+144.611632415 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.730847 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hgzzw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.749864 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cbltv" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.756037 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtcks\" (UniqueName: \"kubernetes.io/projected/9bc30087-3b0d-441b-b384-853b7e1003ad-kube-api-access-jtcks\") pod \"oauth-openshift-558db77b4-tqzlw\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.757370 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qvr6" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.765834 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d96md"] Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.772097 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79f27\" (UniqueName: \"kubernetes.io/projected/331bbbbd-b003-4190-b8a6-149cc2b81b39-kube-api-access-79f27\") pod \"apiserver-7bbb656c7d-sbq5r\" (UID: \"331bbbbd-b003-4190-b8a6-149cc2b81b39\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.785630 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wp5r\" (UniqueName: \"kubernetes.io/projected/df1defe0-ab80-4262-a444-23043c0a5ff0-kube-api-access-6wp5r\") pod \"ingress-canary-vxtlv\" (UID: \"df1defe0-ab80-4262-a444-23043c0a5ff0\") " pod="openshift-ingress-canary/ingress-canary-vxtlv" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.798233 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vnr5s" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.804521 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.807932 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsxkn\" (UniqueName: \"kubernetes.io/projected/0ea03516-b574-4e25-8f8f-b45c358b5295-kube-api-access-qsxkn\") pod \"catalog-operator-68c6474976-v2v2x\" (UID: \"0ea03516-b574-4e25-8f8f-b45c358b5295\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v2v2x" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.817791 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p77hn\" (UniqueName: \"kubernetes.io/projected/3580a3b5-6640-41c2-b61f-863c299c59c6-kube-api-access-p77hn\") pod \"dns-default-px962\" (UID: \"3580a3b5-6640-41c2-b61f-863c299c59c6\") " pod="openshift-dns/dns-default-px962" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.824070 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:01 crc kubenswrapper[4767]: E0127 15:52:01.824548 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:02.324531898 +0000 UTC m=+144.713549411 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.831560 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lcrxj" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.838826 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1b629b6-588e-44f8-9f64-613ba63f3313-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-czw9w\" (UID: \"e1b629b6-588e-44f8-9f64-613ba63f3313\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-czw9w" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.843398 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbflz" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.850499 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.864819 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnf8m\" (UniqueName: \"kubernetes.io/projected/dea7593b-32bb-4d48-b47a-2cf9aa0d4185-kube-api-access-gnf8m\") pod \"control-plane-machine-set-operator-78cbb6b69f-7gdqg\" (UID: \"dea7593b-32bb-4d48-b47a-2cf9aa0d4185\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7gdqg" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.865107 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-4vjsw" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.876420 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v2v2x" Jan 27 15:52:01 crc kubenswrapper[4767]: W0127 15:52:01.879067 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46691bb3_2fdb_402e_a030_4855bfd6684a.slice/crio-d481241f0a68ab6c67cc80302a08befbb2dc3cc8bfd8d73f912b9ea8d2583f6a WatchSource:0}: Error finding container d481241f0a68ab6c67cc80302a08befbb2dc3cc8bfd8d73f912b9ea8d2583f6a: Status 404 returned error can't find the container with id d481241f0a68ab6c67cc80302a08befbb2dc3cc8bfd8d73f912b9ea8d2583f6a Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.879307 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqjkg" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.888521 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cml7v" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.889745 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxfhk\" (UniqueName: \"kubernetes.io/projected/bb803c2c-ff0b-4f4a-a566-d0ca1957ce56-kube-api-access-fxfhk\") pod \"package-server-manager-789f6589d5-j6mgl\" (UID: \"bb803c2c-ff0b-4f4a-a566-d0ca1957ce56\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6mgl" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.893255 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-px962" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.905506 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vxtlv" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.909032 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnmk5\" (UniqueName: \"kubernetes.io/projected/762f91d9-714d-4ba5-8c0c-f64498897186-kube-api-access-hnmk5\") pod \"multus-admission-controller-857f4d67dd-gfdql\" (UID: \"762f91d9-714d-4ba5-8c0c-f64498897186\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gfdql" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.913778 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-vxkdk"] Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.921326 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-l4l7n"] Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.924917 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.925189 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-ksqxd"] Jan 27 15:52:01 crc kubenswrapper[4767]: E0127 15:52:01.925420 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:02.425400584 +0000 UTC m=+144.814418107 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.928670 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5thn\" (UniqueName: \"kubernetes.io/projected/21e2fdb8-e486-4a69-b9d4-00c1ce090296-kube-api-access-h5thn\") pod \"machine-config-server-8tbrf\" (UID: \"21e2fdb8-e486-4a69-b9d4-00c1ce090296\") " pod="openshift-machine-config-operator/machine-config-server-8tbrf" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.931667 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8tbrf" Jan 27 15:52:01 crc kubenswrapper[4767]: I0127 15:52:01.952960 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-69lb2"] Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.028017 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:02 crc kubenswrapper[4767]: E0127 15:52:02.028661 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:02.528649838 +0000 UTC m=+144.917667351 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.064957 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-czw9w" Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.068571 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9tjmr"] Jan 27 15:52:02 crc kubenswrapper[4767]: W0127 15:52:02.078694 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56755333_86a4_4a45_b49a_c518575ad5f0.slice/crio-2b00f55f7f5243875c1acb1e7a50772b59bc7b8ff72b14ab769f19740be9d168 WatchSource:0}: Error finding container 2b00f55f7f5243875c1acb1e7a50772b59bc7b8ff72b14ab769f19740be9d168: Status 404 returned error can't find the container with id 2b00f55f7f5243875c1acb1e7a50772b59bc7b8ff72b14ab769f19740be9d168 Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.084141 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vxkdk" event={"ID":"90596a9c-3db0-47e4-a002-a97cd73f2ab9","Type":"ContainerStarted","Data":"e4c52c307ead7c3c48ef164c785632647295f714f3938cbbaaa8e2d05a805056"} Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.086249 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sq4g8" event={"ID":"10023d91-2be9-4ad9-a801-ef782f263aca","Type":"ContainerStarted","Data":"ba4203847f19540683baf02f3d05a41e5a651b4c5d98a007c52e97a2c960b4d9"} Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.089825 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-4n6ch" event={"ID":"333657b1-ebc6-4900-93eb-7762fd0eeaac","Type":"ContainerStarted","Data":"2ce7de8f856aea77aa030b38514a75278b96c1462e547566a33cc23370c9bfac"} Jan 27 15:52:02 crc kubenswrapper[4767]: W0127 15:52:02.091249 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6418788_50b4_4982_bde2_dc7acd6728ed.slice/crio-704e85d0d96bdc7db6cda83fed9fc943904bd096d07424af12fdf7cea60d83b0 WatchSource:0}: Error finding container 704e85d0d96bdc7db6cda83fed9fc943904bd096d07424af12fdf7cea60d83b0: Status 404 returned error can't find the container with id 704e85d0d96bdc7db6cda83fed9fc943904bd096d07424af12fdf7cea60d83b0 Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.094354 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6mgl" Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.096358 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f9mz7" event={"ID":"a6bb2ba9-5a6a-438b-960e-05170e0928a8","Type":"ContainerStarted","Data":"f0a7b05c060a9054ef24c2fa5e60323eb4f0a091da4b2a418e95c4334754c334"} Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.096408 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f9mz7" event={"ID":"a6bb2ba9-5a6a-438b-960e-05170e0928a8","Type":"ContainerStarted","Data":"6083d13bf631c455bbd01ff9e28560fe24c9606c21ea7bd6de407e466e51c3d9"} Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.098401 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" event={"ID":"34c3a00d-6b69-4790-ba95-29ae01dd296f","Type":"ContainerStarted","Data":"4ead8b43fba68d45b2586a86ffcde833e7eb6b061e2a7ec5cecae34437f37a15"} Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.105475 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-bk226" event={"ID":"244d70a9-5aaf-495d-82bc-fcfaa9a5a984","Type":"ContainerStarted","Data":"53a7305345e9786ecad53cc1cb4d95568f2919b4bf9b4dd0ad93e7f2cd070a01"} Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.112055 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ksqxd" event={"ID":"25e39933-042b-46a8-9e96-19acb0944e08","Type":"ContainerStarted","Data":"7de4452fc8677e3f1a7c1c5cf978dcd26181c4d79c407eda2cbeccd51666f33d"} Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.112417 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-gfdql" Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.115481 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l4l7n" event={"ID":"4d7e5c51-63bd-46b6-adef-459b93b18142","Type":"ContainerStarted","Data":"f2b47ee1ca29a33658f741d8cc7533ad9602bf69d74980397b32cd2888800b10"} Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.121716 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7gdqg" Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.123437 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d96md" event={"ID":"46691bb3-2fdb-402e-a030-4855bfd6684a","Type":"ContainerStarted","Data":"d481241f0a68ab6c67cc80302a08befbb2dc3cc8bfd8d73f912b9ea8d2583f6a"} Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.127534 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6l2vm" event={"ID":"ffc4f39e-e317-408b-8031-5cf9b9bb20cf","Type":"ContainerStarted","Data":"cc111946257fd68afbeef4fb5d297f83eed788f259a18f425db07889481b882c"} Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.127581 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6l2vm" event={"ID":"ffc4f39e-e317-408b-8031-5cf9b9bb20cf","Type":"ContainerStarted","Data":"da99a7611a67f3a5ed29a6dca7fbb5ec4178e148fca7692c4258d197d335a74f"} Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.128980 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:02 crc kubenswrapper[4767]: E0127 15:52:02.129426 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:02.629406881 +0000 UTC m=+145.018424424 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.129926 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" event={"ID":"c2a37542-d13b-431e-a375-69e3fc2e90eb","Type":"ContainerStarted","Data":"0cb8995b5c7832cc82f1a386e96265071758ef712084db3f7862e38f46f9e8da"} Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.132147 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-64xhv" event={"ID":"f5f2dc8d-8525-4033-bd8d-3bc73fcbf41d","Type":"ContainerStarted","Data":"80c6b8eadd2efb7db8a04a8623c9af7c392ecafb3c5f1d6ce5ed7436095f08f3"} Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.196450 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cd7g2"] Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.199193 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4vdpc"] Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.236491 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:02 crc kubenswrapper[4767]: E0127 15:52:02.236876 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:02.736863147 +0000 UTC m=+145.125880670 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.247896 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5pb8t"] Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.258443 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7m254"] Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.349750 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:02 crc kubenswrapper[4767]: E0127 15:52:02.350726 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:02.850703619 +0000 UTC m=+145.239721142 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:02 crc kubenswrapper[4767]: W0127 15:52:02.432100 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6283b57b_899c_4d3d_b1a4_531a683d3853.slice/crio-1bbeeaf57cc8c970832871e058504c40813b09a90c55f14ce7ee50b4b3e0615a WatchSource:0}: Error finding container 1bbeeaf57cc8c970832871e058504c40813b09a90c55f14ce7ee50b4b3e0615a: Status 404 returned error can't find the container with id 1bbeeaf57cc8c970832871e058504c40813b09a90c55f14ce7ee50b4b3e0615a Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.451778 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:02 crc kubenswrapper[4767]: E0127 15:52:02.452354 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:02.952338837 +0000 UTC m=+145.341356360 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.462938 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mtbc"] Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.463950 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4jsr"] Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.552858 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:02 crc kubenswrapper[4767]: E0127 15:52:02.553595 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:03.053576683 +0000 UTC m=+145.442594206 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.655410 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:02 crc kubenswrapper[4767]: E0127 15:52:02.655812 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:03.155783688 +0000 UTC m=+145.544801211 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.695385 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4blj6"] Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.721306 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-fctcl"] Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.723296 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-hgzzw"] Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.756915 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:02 crc kubenswrapper[4767]: E0127 15:52:02.757410 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:03.257395355 +0000 UTC m=+145.646412878 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:02 crc kubenswrapper[4767]: W0127 15:52:02.827649 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb9fa9e7_f243_4240_b739_babed8be646f.slice/crio-b64a2a7433d174753597c6e7ce8fff675d1fc9901bcac5eb24a97e7de724b424 WatchSource:0}: Error finding container b64a2a7433d174753597c6e7ce8fff675d1fc9901bcac5eb24a97e7de724b424: Status 404 returned error can't find the container with id b64a2a7433d174753597c6e7ce8fff675d1fc9901bcac5eb24a97e7de724b424 Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.859170 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:02 crc kubenswrapper[4767]: E0127 15:52:02.860073 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 15:52:03.360054402 +0000 UTC m=+145.749071915 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.908317 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qvr6"]
Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.916709 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vnr5s"]
Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.922739 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lcrxj"]
Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.961094 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r"]
Jan 27 15:52:02 crc kubenswrapper[4767]: I0127 15:52:02.974248 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:52:02 crc kubenswrapper[4767]: E0127 15:52:02.974643 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:03.474620965 +0000 UTC m=+145.863638488 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.075820 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp"
Jan 27 15:52:03 crc kubenswrapper[4767]: E0127 15:52:03.076082 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:03.576070977 +0000 UTC m=+145.965088500 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.136975 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mtbc" event={"ID":"3587430f-8bc8-4625-b262-e1d6f1c8454b","Type":"ContainerStarted","Data":"28518c18cb67334f0fccf866d4fb3ba1bba72384fe0b7fb18364c0b755440080"}
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.138044 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqjkg"]
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.138380 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cd7g2" event={"ID":"ac80ca44-c0df-4f24-8177-5dc9cd10ea4f","Type":"ContainerStarted","Data":"543176651bb8603e8b4603c67e128e46c23ba2dcf3f2eb9253a31b52405bca1d"}
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.139796 4767 generic.go:334] "Generic (PLEG): container finished" podID="c2a37542-d13b-431e-a375-69e3fc2e90eb" containerID="1e55401c33161e432e48d10011440ccb805fa2eba5e5377a3f2ecb27e11060e0" exitCode=0
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.140129 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" event={"ID":"c2a37542-d13b-431e-a375-69e3fc2e90eb","Type":"ContainerDied","Data":"1e55401c33161e432e48d10011440ccb805fa2eba5e5377a3f2ecb27e11060e0"}
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.141390 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-fctcl" event={"ID":"29ab3a2b-59d9-4e16-915f-f76e1d215929","Type":"ContainerStarted","Data":"488e2181e6f77c9e56a6ba52cbd0433fd72c346f0e9f6c3064841900f4312c94"}
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.161568 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" event={"ID":"9d9edf4c-6df3-484c-9bb7-a344d8147aa6","Type":"ContainerStarted","Data":"703f253827bdfdf33b79d0813a0da80faf5e0b5a00a0e21d64ea357858f3b3e0"}
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.163181 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cbltv"]
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.167188 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" event={"ID":"331bbbbd-b003-4190-b8a6-149cc2b81b39","Type":"ContainerStarted","Data":"062eace2a44b2c92f7b189514d5e671e704d3be09bba532dea6995207c31410a"}
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.168050 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492145-4vjsw"]
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.168994 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4vdpc" event={"ID":"c118259f-65cb-437d-abda-b69562018d38","Type":"ContainerStarted","Data":"708c60ae0e707c5eef5b9a13a0931d7347190a9758076aa3d99e1283b38a8012"}
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.171228 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qvr6" event={"ID":"e82f107f-9b85-4fdd-911d-ca674a002dea","Type":"ContainerStarted","Data":"ad12714cf546ddc3bcb308e75461476d0a721c22aa5f6975748f39d2c9a7931a"}
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.173319 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-bk226" event={"ID":"244d70a9-5aaf-495d-82bc-fcfaa9a5a984","Type":"ContainerStarted","Data":"d17d83e7702c4ac177cb39973fce8230bed69c12802422f31ecd87bf485ce016"}
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.176886 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:52:03 crc kubenswrapper[4767]: E0127 15:52:03.177120 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:03.677101757 +0000 UTC m=+146.066119280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.177189 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp"
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.178276 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9tjmr" event={"ID":"b6418788-50b4-4982-bde2-dc7acd6728ed","Type":"ContainerStarted","Data":"704e85d0d96bdc7db6cda83fed9fc943904bd096d07424af12fdf7cea60d83b0"}
Jan 27 15:52:03 crc kubenswrapper[4767]: E0127 15:52:03.179379 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:03.679365663 +0000 UTC m=+146.068383196 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.180508 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8tbrf" event={"ID":"21e2fdb8-e486-4a69-b9d4-00c1ce090296","Type":"ContainerStarted","Data":"a5f70c1f2d4ed78c4fdcd06acf4c338def70e344008d210913eb4aab977dad78"}
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.181510 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4jsr" event={"ID":"401b07cc-e3c3-4d71-9c55-c30f78a0335c","Type":"ContainerStarted","Data":"313c0f04874259c0e99fa26a939dfa1376a65c224dae2f49af38e96cc0d1bbec"}
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.191487 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-4n6ch" event={"ID":"333657b1-ebc6-4900-93eb-7762fd0eeaac","Type":"ContainerStarted","Data":"5d5cbb40f9309bdc082ea2f8853eda48ee56d934df57b95aa5d47aef5db6a7a6"}
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.194825 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vnr5s" event={"ID":"fd479a9b-8563-433e-aae2-ab0856594b3f","Type":"ContainerStarted","Data":"88d6f08d2bd74ee3fdf8f48c0608d524d5390a6a9fa8300a7e5b2bc325a69964"}
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.195865 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vxtlv"]
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.199292 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hgzzw" event={"ID":"9602d005-3eaf-4e35-a19b-a406036cc295","Type":"ContainerStarted","Data":"29a95f9e257a62580d07aef36f671d802b81b931e66cf949ead8df3d081ee092"}
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.201703 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-64xhv" event={"ID":"f5f2dc8d-8525-4033-bd8d-3bc73fcbf41d","Type":"ContainerStarted","Data":"d54c6c5683912985a77c65bf9822f0225deba3b4b60ac71da454d9cabb711f12"}
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.201986 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-64xhv"
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.203854 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-4blj6" event={"ID":"bb9fa9e7-f243-4240-b739-babed8be646f","Type":"ContainerStarted","Data":"b64a2a7433d174753597c6e7ce8fff675d1fc9901bcac5eb24a97e7de724b424"}
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.204956 4767 patch_prober.go:28] interesting pod/console-operator-58897d9998-64xhv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.204995 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-64xhv" podUID="f5f2dc8d-8525-4033-bd8d-3bc73fcbf41d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused"
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.208151 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vxkdk" event={"ID":"90596a9c-3db0-47e4-a002-a97cd73f2ab9","Type":"ContainerStarted","Data":"d7e6549e037f00301940b5243a10cd71a0e3116c74030a1d4ab224e5987730c5"}
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.209775 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l4l7n" event={"ID":"4d7e5c51-63bd-46b6-adef-459b93b18142","Type":"ContainerStarted","Data":"339ad0e259ccab59b719369733ce1b981027bb26b3fd3d0ff77a64bb9ac638a6"}
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.210969 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sq4g8" event={"ID":"10023d91-2be9-4ad9-a801-ef782f263aca","Type":"ContainerStarted","Data":"b0ae87cf7df1b72c288bfc51f2a75b06462f95e4c0f2a0753e9101d9931e9816"}
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.223030 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5pb8t" event={"ID":"6283b57b-899c-4d3d-b1a4-531a683d3853","Type":"ContainerStarted","Data":"1bbeeaf57cc8c970832871e058504c40813b09a90c55f14ce7ee50b4b3e0615a"}
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.225003 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cml7v"]
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.225934 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tqzlw"]
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.239065 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-69lb2" event={"ID":"56755333-86a4-4a45-b49a-c518575ad5f0","Type":"ContainerStarted","Data":"2b00f55f7f5243875c1acb1e7a50772b59bc7b8ff72b14ab769f19740be9d168"}
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.249190 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" event={"ID":"34c3a00d-6b69-4790-ba95-29ae01dd296f","Type":"ContainerStarted","Data":"26198e480ae52e3c31055d523eee5ce991004cd80a99480be6c5e5b9fd089f55"}
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.249676 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2"
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.250711 4767 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-t67t2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.250756 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" podUID="34c3a00d-6b69-4790-ba95-29ae01dd296f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused"
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.252816 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lcrxj" event={"ID":"788715b0-b06a-4f34-afb4-443a4c8ff7b1","Type":"ContainerStarted","Data":"d5bd862bce62ba58ccc554f38429d560e55e8a05b6f408d21308a8290972a439"}
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.267516 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6l2vm" event={"ID":"ffc4f39e-e317-408b-8031-5cf9b9bb20cf","Type":"ContainerStarted","Data":"43a7aaf429b82e9796a64ab3aeaeaeb09a0576db76d389f9899405c0fb3f6055"}
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.279458 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:52:03 crc kubenswrapper[4767]: E0127 15:52:03.279599 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:03.779577739 +0000 UTC m=+146.168595262 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.279743 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp"
Jan 27 15:52:03 crc kubenswrapper[4767]: E0127 15:52:03.280642 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:03.78063311 +0000 UTC m=+146.169650633 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.295807 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6mgl"]
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.299686 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mbflz"]
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.312560 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v2v2x"]
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.322400 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-px962"]
Jan 27 15:52:03 crc kubenswrapper[4767]: W0127 15:52:03.334617 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8299442b_4dd3_4520_9e47_d461d0538647.slice/crio-fe7a0ccf9394be6c8917623eb72b2d0a78103aa2305f6abe1bce8bc6168c5dc7 WatchSource:0}: Error finding container fe7a0ccf9394be6c8917623eb72b2d0a78103aa2305f6abe1bce8bc6168c5dc7: Status 404 returned error can't find the container with id fe7a0ccf9394be6c8917623eb72b2d0a78103aa2305f6abe1bce8bc6168c5dc7
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.381145 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:52:03 crc kubenswrapper[4767]: E0127 15:52:03.382774 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:03.882755432 +0000 UTC m=+146.271772955 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.408857 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7gdqg"]
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.419825 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-gfdql"]
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.424665 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-czw9w"]
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.438750 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" podStartSLOduration=124.438729885 podStartE2EDuration="2m4.438729885s" podCreationTimestamp="2026-01-27 15:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:03.43787019 +0000 UTC m=+145.826887723" watchObservedRunningTime="2026-01-27 15:52:03.438729885 +0000 UTC m=+145.827747408"
Jan 27 15:52:03 crc kubenswrapper[4767]: E0127 15:52:03.484146 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:03.984033949 +0000 UTC m=+146.373051472 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.483392 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp"
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.509011 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6l2vm" podStartSLOduration=125.508983213 podStartE2EDuration="2m5.508983213s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:03.478484008 +0000 UTC m=+145.867501531" watchObservedRunningTime="2026-01-27 15:52:03.508983213 +0000 UTC m=+145.898000756"
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.509971 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-bk226" podStartSLOduration=125.509960441 podStartE2EDuration="2m5.509960441s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:03.508646363 +0000 UTC m=+145.897663886" watchObservedRunningTime="2026-01-27 15:52:03.509960441 +0000 UTC m=+145.898977964"
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.550811 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sq4g8" podStartSLOduration=125.550787815 podStartE2EDuration="2m5.550787815s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:03.548005365 +0000 UTC m=+145.937022888" watchObservedRunningTime="2026-01-27 15:52:03.550787815 +0000 UTC m=+145.939805338"
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.588182 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:52:03 crc kubenswrapper[4767]: E0127 15:52:03.588336 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:04.088273313 +0000 UTC m=+146.477290836 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.589895 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp"
Jan 27 15:52:03 crc kubenswrapper[4767]: E0127 15:52:03.590323 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:04.090313252 +0000 UTC m=+146.479330765 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.595997 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-64xhv" podStartSLOduration=125.595959215 podStartE2EDuration="2m5.595959215s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:03.589495208 +0000 UTC m=+145.978512731" watchObservedRunningTime="2026-01-27 15:52:03.595959215 +0000 UTC m=+145.984976749"
Jan 27 15:52:03 crc kubenswrapper[4767]: W0127 15:52:03.627338 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1b629b6_588e_44f8_9f64_613ba63f3313.slice/crio-333d91fabd929a208ada6f18280db312fb23cbe1227244f24081f77a9bc44609 WatchSource:0}: Error finding container 333d91fabd929a208ada6f18280db312fb23cbe1227244f24081f77a9bc44609: Status 404 returned error can't find the container with id 333d91fabd929a208ada6f18280db312fb23cbe1227244f24081f77a9bc44609
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.690387 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:52:03 crc kubenswrapper[4767]: E0127 15:52:03.690523 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:04.190501568 +0000 UTC m=+146.579519091 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.690927 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp"
Jan 27 15:52:03 crc kubenswrapper[4767]: E0127 15:52:03.691283 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:04.19127236 +0000 UTC m=+146.580289883 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.793372 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:52:03 crc kubenswrapper[4767]: E0127 15:52:03.793531 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:04.293509345 +0000 UTC m=+146.682526868 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.793565 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp"
Jan 27 15:52:03 crc kubenswrapper[4767]: E0127 15:52:03.793863 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:04.293856055 +0000 UTC m=+146.682873578 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.898870 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:52:03 crc kubenswrapper[4767]: E0127 15:52:03.899077 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:04.399046266 +0000 UTC m=+146.788063789 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:03 crc kubenswrapper[4767]: I0127 15:52:03.899190 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp"
Jan 27 15:52:03 crc kubenswrapper[4767]: E0127 15:52:03.899497 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:04.399487449 +0000 UTC m=+146.788505042 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.001743 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:52:04 crc kubenswrapper[4767]: E0127 15:52:04.002291 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:04.50226566 +0000 UTC m=+146.891283183 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.103896 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp"
Jan 27 15:52:04 crc kubenswrapper[4767]: E0127 15:52:04.104507 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:04.604494165 +0000 UTC m=+146.993511688 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.206374 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:52:04 crc kubenswrapper[4767]: E0127 15:52:04.206586 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:04.706557234 +0000 UTC m=+147.095574757 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.206838 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp"
Jan 27 15:52:04 crc kubenswrapper[4767]: E0127 15:52:04.207177 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:04.707165271 +0000 UTC m=+147.096182874 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.274992 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-69lb2" event={"ID":"56755333-86a4-4a45-b49a-c518575ad5f0","Type":"ContainerStarted","Data":"7d781376a9c183480d3cbdce3db87aeacc159ed4ce5d6889f48f507b3bdb6e22"}
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.279398 4767 generic.go:334] "Generic (PLEG): container finished" podID="4d7e5c51-63bd-46b6-adef-459b93b18142" containerID="339ad0e259ccab59b719369733ce1b981027bb26b3fd3d0ff77a64bb9ac638a6" exitCode=0
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.279516 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l4l7n" event={"ID":"4d7e5c51-63bd-46b6-adef-459b93b18142","Type":"ContainerDied","Data":"339ad0e259ccab59b719369733ce1b981027bb26b3fd3d0ff77a64bb9ac638a6"}
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.282911 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-4vjsw" event={"ID":"df3e72cd-0745-4a8e-b3b5-25d23bccaa1c","Type":"ContainerStarted","Data":"6e02ea4813755b72eff5d622bdaca2c3a9c1cdf7ae5e71c7e5a460915de755a8"}
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.284970 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mtbc" event={"ID":"3587430f-8bc8-4625-b262-e1d6f1c8454b","Type":"ContainerStarted","Data":"a0fcb146aeb5e8e1920e51dd574609451a8023bb9522de630589424c9a62580c"}
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.286250 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ksqxd" event={"ID":"25e39933-042b-46a8-9e96-19acb0944e08","Type":"ContainerStarted","Data":"cbf24be75441564d52bf7e30a64cc331d4ca89c0fcc7e0dc90b39ede7cb56550"}
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.287396 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-ksqxd"
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.288606 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cbltv" event={"ID":"aa6ea9de-9f71-4d6e-8304-536ccfaeaec0","Type":"ContainerStarted","Data":"a79d1795c9cf5f608126ae01b7e4dc4e607d07ad939724d29f091b4c6e7b39fb"}
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.288836 4767 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksqxd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.288973 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ksqxd" podUID="25e39933-042b-46a8-9e96-19acb0944e08" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.289661 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6mgl" event={"ID":"bb803c2c-ff0b-4f4a-a566-d0ca1957ce56","Type":"ContainerStarted","Data":"5c38c04a426d016879897fd26c1067c4e8d2c8be763ccf74c3e62fc5f3453938"}
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.291083 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-gfdql" event={"ID":"762f91d9-714d-4ba5-8c0c-f64498897186","Type":"ContainerStarted","Data":"64b6f525b2fedc4672dac89246dcd1fa86b18d8270c97d5743b4f1be78235c18"}
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.293646 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-px962" event={"ID":"3580a3b5-6640-41c2-b61f-863c299c59c6","Type":"ContainerStarted","Data":"a5aa162d4cc82c93c8f653c113498d4fe51baa41a46765f84c7ed44c585be741"}
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.297659 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqjkg" event={"ID":"2a405d09-41d7-423a-a5d0-5413839ee40b","Type":"ContainerStarted","Data":"6b3c3c03dfdf3da22bbe945b523ef00cbcad376e78f5e55019bc805090cbb4d9"}
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.302220 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9tjmr" event={"ID":"b6418788-50b4-4982-bde2-dc7acd6728ed","Type":"ContainerStarted","Data":"aebb07d786cc0a287dd5d1db6bc8b11c17d4fb450d7f2012640bca873ba92a4e"}
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.303442 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8tbrf" event={"ID":"21e2fdb8-e486-4a69-b9d4-00c1ce090296","Type":"ContainerStarted","Data":"e4f1f34a8bc52f688fc3ae04853807fb7a77f4713b488f63aeed9f0056f94a98"}
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.305518 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-czw9w" event={"ID":"e1b629b6-588e-44f8-9f64-613ba63f3313","Type":"ContainerStarted","Data":"333d91fabd929a208ada6f18280db312fb23cbe1227244f24081f77a9bc44609"}
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.306187 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7gdqg" event={"ID":"dea7593b-32bb-4d48-b47a-2cf9aa0d4185","Type":"ContainerStarted","Data":"23b88785a0d8918bc6fe086455847814b8e8baf39c682b73733272ffa0c7e31d"}
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.306929 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cml7v" event={"ID":"fd7217ad-af23-4d91-bc2d-8d54a9e5580f","Type":"ContainerStarted","Data":"897e2f9899ca7e4c49cd6a4879a042cd090f1e67c482e81df9d301ce9fd6764e"}
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.307294 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:52:04 crc kubenswrapper[4767]: E0127 15:52:04.307537 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:04.807503122 +0000 UTC m=+147.196520665 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.307614 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp"
Jan 27 15:52:04 crc kubenswrapper[4767]: E0127 15:52:04.307929 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:04.807912734 +0000 UTC m=+147.196930317 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.309114 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4jsr" event={"ID":"401b07cc-e3c3-4d71-9c55-c30f78a0335c","Type":"ContainerStarted","Data":"de0a02b89faedc36f98540ebebfe59dbe3de3de3aa7e043087ccb24bdd6f8feb"}
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.310469 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4vdpc" event={"ID":"c118259f-65cb-437d-abda-b69562018d38","Type":"ContainerStarted","Data":"1821dc422caa867e36b76e11eb390faeb0d2e901acc35f7deb1adad34e4eaf68"}
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.315231 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f9mz7" event={"ID":"a6bb2ba9-5a6a-438b-960e-05170e0928a8","Type":"ContainerStarted","Data":"f009bc6635676ca681499a307ce9438579f77af9a304d1520d30b4b69bf07ea4"}
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.317585 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-ksqxd" podStartSLOduration=126.317570194 podStartE2EDuration="2m6.317570194s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:04.31570841 +0000 UTC m=+146.704725933" watchObservedRunningTime="2026-01-27 15:52:04.317570194 +0000 UTC m=+146.706587717"
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.320528 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" event={"ID":"9bc30087-3b0d-441b-b384-853b7e1003ad","Type":"ContainerStarted","Data":"b067bfa0872a5da37affc6eb98c088d2d27e9dfca3b4b7f8fbd83628c377aa2f"}
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.321832 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbflz" event={"ID":"8299442b-4dd3-4520-9e47-d461d0538647","Type":"ContainerStarted","Data":"fe7a0ccf9394be6c8917623eb72b2d0a78103aa2305f6abe1bce8bc6168c5dc7"}
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.324073 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cd7g2" event={"ID":"ac80ca44-c0df-4f24-8177-5dc9cd10ea4f","Type":"ContainerStarted","Data":"2bc0b828a69b5da2a04f9b5524a8f5fe349df76f30762a87964bfc90837a80ed"}
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.333214 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" event={"ID":"9d9edf4c-6df3-484c-9bb7-a344d8147aa6","Type":"ContainerStarted","Data":"721bdd3be33645608399716428d5c6efade9c188c0d547618c1141db7d4a606e"}
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.333264 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-7m254"
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.336380 4767 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7m254 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.336446 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" podUID="9d9edf4c-6df3-484c-9bb7-a344d8147aa6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.337830 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-8tbrf" podStartSLOduration=6.337815251 podStartE2EDuration="6.337815251s" podCreationTimestamp="2026-01-27 15:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:04.336695738 +0000 UTC m=+146.725713271" watchObservedRunningTime="2026-01-27 15:52:04.337815251 +0000 UTC m=+146.726832774"
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.338982 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d96md" event={"ID":"46691bb3-2fdb-402e-a030-4855bfd6684a","Type":"ContainerStarted","Data":"3c0a674b1053477e9adca8219fa30125d7bfc9f04b23cf113f7524a30a0a22e9"}
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.341450 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vxtlv" event={"ID":"df1defe0-ab80-4262-a444-23043c0a5ff0","Type":"ContainerStarted","Data":"8223b716adb00509f7267ec6678fe39d9f0b32dadf69a13d31a8c14d737b0b39"}
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.356009 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-k4jsr" podStartSLOduration=126.355991078 podStartE2EDuration="2m6.355991078s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:04.354276228 +0000 UTC m=+146.743293751" watchObservedRunningTime="2026-01-27 15:52:04.355991078 +0000 UTC m=+146.745008601"
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.360572 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v2v2x" event={"ID":"0ea03516-b574-4e25-8f8f-b45c358b5295","Type":"ContainerStarted","Data":"31becf8a06f9e3e7febb6356e47dfd05d37003629e8ed1661c200aedc2237d78"}
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.365308 4767 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-t67t2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.365376 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" podUID="34c3a00d-6b69-4790-ba95-29ae01dd296f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused"
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.377919 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-f9mz7" podStartSLOduration=126.377903923 podStartE2EDuration="2m6.377903923s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:04.377223214 +0000 UTC m=+146.766240737" watchObservedRunningTime="2026-01-27 15:52:04.377903923 +0000 UTC m=+146.766921446"
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.401706 4767 patch_prober.go:28] interesting pod/console-operator-58897d9998-64xhv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.401817 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-64xhv" podUID="f5f2dc8d-8525-4033-bd8d-3bc73fcbf41d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused"
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.404759 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" podStartSLOduration=126.404724131 podStartE2EDuration="2m6.404724131s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:04.403827645 +0000 UTC m=+146.792845178" watchObservedRunningTime="2026-01-27 15:52:04.404724131 +0000 UTC m=+146.793741674"
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.411945 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:52:04 crc kubenswrapper[4767]: E0127 15:52:04.412757 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:04.912708453 +0000 UTC m=+147.301725996 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.507257 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-vxkdk" podStartSLOduration=126.507234544 podStartE2EDuration="2m6.507234544s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:04.465918606 +0000 UTC m=+146.854936129" watchObservedRunningTime="2026-01-27 15:52:04.507234544 +0000 UTC m=+146.896252087"
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.513947 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp"
Jan 27 15:52:04 crc kubenswrapper[4767]: E0127 15:52:04.516022 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:05.016005119 +0000 UTC m=+147.405022702 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.545498 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d96md" podStartSLOduration=126.545482184 podStartE2EDuration="2m6.545482184s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:04.545393651 +0000 UTC m=+146.934411174" watchObservedRunningTime="2026-01-27 15:52:04.545482184 +0000 UTC m=+146.934499707"
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.545624 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-4n6ch" podStartSLOduration=126.545619788 podStartE2EDuration="2m6.545619788s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:04.52053082 +0000 UTC m=+146.909548353" watchObservedRunningTime="2026-01-27 15:52:04.545619788 +0000 UTC m=+146.934637311"
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.616034 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:52:04 crc kubenswrapper[4767]: E0127 15:52:04.616232 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:05.116215115 +0000 UTC m=+147.505232638 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.616413 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp"
Jan 27 15:52:04 crc kubenswrapper[4767]: E0127 15:52:04.616769 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:05.116761391 +0000 UTC m=+147.505778914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.690134 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-4n6ch"
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.692562 4767 patch_prober.go:28] interesting pod/router-default-5444994796-4n6ch container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.692634 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4n6ch" podUID="333657b1-ebc6-4900-93eb-7762fd0eeaac" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.717316 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:52:04 crc kubenswrapper[4767]: E0127 15:52:04.717459 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:05.217436541 +0000 UTC m=+147.606454064 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.717734 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:04 crc kubenswrapper[4767]: E0127 15:52:04.718146 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:05.218126841 +0000 UTC m=+147.607144384 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.819273 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:04 crc kubenswrapper[4767]: E0127 15:52:04.819486 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:05.3194548 +0000 UTC m=+147.708472323 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.819716 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:04 crc kubenswrapper[4767]: E0127 15:52:04.820063 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:05.320047447 +0000 UTC m=+147.709065040 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.920694 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:04 crc kubenswrapper[4767]: E0127 15:52:04.920838 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:05.420810429 +0000 UTC m=+147.809827952 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:04 crc kubenswrapper[4767]: I0127 15:52:04.920915 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:04 crc kubenswrapper[4767]: E0127 15:52:04.921237 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:05.421228082 +0000 UTC m=+147.810245605 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.022511 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:05 crc kubenswrapper[4767]: E0127 15:52:05.022747 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:05.522705865 +0000 UTC m=+147.911723388 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.023151 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:05 crc kubenswrapper[4767]: E0127 15:52:05.023518 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:05.523501458 +0000 UTC m=+147.912518981 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.124885 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:05 crc kubenswrapper[4767]: E0127 15:52:05.125098 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:05.625052983 +0000 UTC m=+148.014070506 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.125656 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:05 crc kubenswrapper[4767]: E0127 15:52:05.125966 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:05.625946579 +0000 UTC m=+148.014964102 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.227297 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:05 crc kubenswrapper[4767]: E0127 15:52:05.227749 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:05.727728561 +0000 UTC m=+148.116746094 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.328654 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:05 crc kubenswrapper[4767]: E0127 15:52:05.329295 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:05.829278097 +0000 UTC m=+148.218295620 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.395779 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vxtlv" event={"ID":"df1defe0-ab80-4262-a444-23043c0a5ff0","Type":"ContainerStarted","Data":"daca45d870e707c2d13249492930b30e4fd431c551b0d3a127adf10278088c5b"} Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.398482 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hgzzw" event={"ID":"9602d005-3eaf-4e35-a19b-a406036cc295","Type":"ContainerStarted","Data":"3763b9f814a20d23b409dcb905730920fa6b885f441cbceeab4de8d6f614a7e4"} Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.399923 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cml7v" event={"ID":"fd7217ad-af23-4d91-bc2d-8d54a9e5580f","Type":"ContainerStarted","Data":"deaa31a329b4ca5f361ae237b3f5ae53b3454df2bdc10163bb413f78cfd3bcc5"} Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.402356 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lcrxj" event={"ID":"788715b0-b06a-4f34-afb4-443a4c8ff7b1","Type":"ContainerStarted","Data":"da5c356ced0e71439649abd79738b6655688708b0f00c667299b4e99d87fbb88"} Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.404087 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qvr6" event={"ID":"e82f107f-9b85-4fdd-911d-ca674a002dea","Type":"ContainerStarted","Data":"44aff4f5f16bb683aaa5694647f5dbdde4fdee8c485a8538fdf8fe794f592333"} Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.406060 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
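[The stretch above shows the same UnmountVolume/MountVolume pair being re-requested roughly every 100 ms, while each failure pushes a "No retries permitted until ..." deadline 500 ms out. The following is a minimal Go sketch of that gating pattern only; the names are invented for illustration, and kubelet's real logic in nestedpendingoperations.go additionally grows the window exponentially up to a cap.]

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // retryGate refuses to re-run an operation until
    // lastErrorTime + durationBeforeRetry, mirroring the
    // "No retries permitted until ... (durationBeforeRetry 500ms)"
    // entries above. All identifiers here are illustrative.
    type retryGate struct {
    	lastErrorTime       time.Time
    	durationBeforeRetry time.Duration
    }

    func (g *retryGate) run(now time.Time, op func() error) error {
    	deadline := g.lastErrorTime.Add(g.durationBeforeRetry)
    	if now.Before(deadline) {
    		return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
    			deadline.Format(time.RFC3339Nano), g.durationBeforeRetry)
    	}
    	if err := op(); err != nil {
    		g.lastErrorTime = now // re-arm the gate on failure
    		return err
    	}
    	return nil
    }

    func main() {
    	gate := &retryGate{durationBeforeRetry: 500 * time.Millisecond}
    	mountDevice := func() error {
    		// The persistent failure seen throughout this section.
    		return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
    	}
    	start := time.Now()
    	// Simulate the reconciler re-requesting every 100ms: only attempts
    	// outside the 500ms window actually execute the operation.
    	for i := 0; i < 8; i++ {
    		fmt.Println(gate.run(start.Add(time.Duration(i)*100*time.Millisecond), mountDevice))
    	}
    }

[Run as-is, this prints one real failure, four gated "no retries permitted" messages, another real failure at the 500 ms mark, and so on, which is exactly the cadence of the log entries above.]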
pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" event={"ID":"9bc30087-3b0d-441b-b384-853b7e1003ad","Type":"ContainerStarted","Data":"2d127503c268c098fe2cb1c67771ec45bedfe60e274619421bdf3ee7c0bd0542"} Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.406412 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.408962 4767 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-tqzlw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.28:6443/healthz\": dial tcp 10.217.0.28:6443: connect: connection refused" start-of-body= Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.409030 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" podUID="9bc30087-3b0d-441b-b384-853b7e1003ad" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.28:6443/healthz\": dial tcp 10.217.0.28:6443: connect: connection refused" Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.409517 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-czw9w" event={"ID":"e1b629b6-588e-44f8-9f64-613ba63f3313","Type":"ContainerStarted","Data":"1dde332289ce7bb3dd05409716c592cb89a887d0d11fa52e6efb0a7738d18206"} Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.417652 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-fctcl" event={"ID":"29ab3a2b-59d9-4e16-915f-f76e1d215929","Type":"ContainerStarted","Data":"1e5958c6fb80aad0623eca553cbf2384760be714cc6ec89aba27ddacd4e4f0ad"} Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.421839 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbflz" event={"ID":"8299442b-4dd3-4520-9e47-d461d0538647","Type":"ContainerStarted","Data":"d8f1936aecb1d24ba4d3766698e29661eacb63cd424e79395428047de04a7ad3"} Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.427938 4767 generic.go:334] "Generic (PLEG): container finished" podID="331bbbbd-b003-4190-b8a6-149cc2b81b39" containerID="363998721cc35166d3814620e8163c4a5140eaf113523a45cc1619d88ad2f412" exitCode=0 Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.428006 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" event={"ID":"331bbbbd-b003-4190-b8a6-149cc2b81b39","Type":"ContainerDied","Data":"363998721cc35166d3814620e8163c4a5140eaf113523a45cc1619d88ad2f412"} Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.436181 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:05 crc kubenswrapper[4767]: E0127 15:52:05.437381 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 15:52:05.937353341 +0000 UTC m=+148.326371044 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.441926 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-czw9w" podStartSLOduration=127.441904163 podStartE2EDuration="2m7.441904163s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:05.441587034 +0000 UTC m=+147.830604577" watchObservedRunningTime="2026-01-27 15:52:05.441904163 +0000 UTC m=+147.830921686" Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.443679 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-vxtlv" podStartSLOduration=7.443669654 podStartE2EDuration="7.443669654s" podCreationTimestamp="2026-01-27 15:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:05.424645322 +0000 UTC m=+147.813662845" watchObservedRunningTime="2026-01-27 15:52:05.443669654 +0000 UTC m=+147.832687177" Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.456949 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6mgl" event={"ID":"bb803c2c-ff0b-4f4a-a566-d0ca1957ce56","Type":"ContainerStarted","Data":"8ff178435fb2066aa9f19a9b069c964f3f7f519b6982feedea44961ac6af3125"} Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.477655 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v2v2x" event={"ID":"0ea03516-b574-4e25-8f8f-b45c358b5295","Type":"ContainerStarted","Data":"9d4a51e2dde3e930ae06b738936d3348984cc8522140de23b983d291cb187189"} Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.478699 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v2v2x" Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.484921 4767 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-v2v2x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.484975 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v2v2x" podUID="0ea03516-b574-4e25-8f8f-b45c358b5295" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.485319 4767 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vnr5s" event={"ID":"fd479a9b-8563-433e-aae2-ab0856594b3f","Type":"ContainerStarted","Data":"9e535c7fcda4445013bdaf961346c120e6898dfabf9e07a34afaabc6e66c27cd"} Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.494181 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-lcrxj" podStartSLOduration=126.494159449 podStartE2EDuration="2m6.494159449s" podCreationTimestamp="2026-01-27 15:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:05.489549855 +0000 UTC m=+147.878567378" watchObservedRunningTime="2026-01-27 15:52:05.494159449 +0000 UTC m=+147.883176972" Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.494806 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" podStartSLOduration=127.494800937 podStartE2EDuration="2m7.494800937s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:05.466237749 +0000 UTC m=+147.855255292" watchObservedRunningTime="2026-01-27 15:52:05.494800937 +0000 UTC m=+147.883818460" Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.506792 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qvr6" podStartSLOduration=127.506773674 podStartE2EDuration="2m7.506773674s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:05.506734603 +0000 UTC m=+147.895752156" watchObservedRunningTime="2026-01-27 15:52:05.506773674 +0000 UTC m=+147.895791197" Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.518285 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-px962" event={"ID":"3580a3b5-6640-41c2-b61f-863c299c59c6","Type":"ContainerStarted","Data":"c176b6a77e88d0a2e443043b8690e453c5cf9144042ddbd531de0787d14f9e73"} Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.521170 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cbltv" event={"ID":"aa6ea9de-9f71-4d6e-8304-536ccfaeaec0","Type":"ContainerStarted","Data":"4d5ce3e3a0d4ca7030f04393cda9015d5d78b5399426efad55fe800edb974342"} Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.522303 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cbltv" Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.523433 4767 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cbltv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.523477 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cbltv" podUID="aa6ea9de-9f71-4d6e-8304-536ccfaeaec0" containerName="marketplace-operator" probeResult="failure" output="Get 
\"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.527039 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqjkg" event={"ID":"2a405d09-41d7-423a-a5d0-5413839ee40b","Type":"ContainerStarted","Data":"453095fbdcbd5851175ab1fb55072a3729fa7fabad51a4b2328fed6cfb82d4a9"} Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.527230 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqjkg" Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.530791 4767 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vqjkg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.530840 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqjkg" podUID="2a405d09-41d7-423a-a5d0-5413839ee40b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.532229 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cml7v" podStartSLOduration=126.532176771 podStartE2EDuration="2m6.532176771s" podCreationTimestamp="2026-01-27 15:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:05.530296537 +0000 UTC m=+147.919314070" watchObservedRunningTime="2026-01-27 15:52:05.532176771 +0000 UTC m=+147.921194294" Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.538919 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:05 crc kubenswrapper[4767]: E0127 15:52:05.544603 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:06.044574171 +0000 UTC m=+148.433591694 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.547005 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" event={"ID":"c2a37542-d13b-431e-a375-69e3fc2e90eb","Type":"ContainerStarted","Data":"91271a04d9f6ec6e25358001d762bcdb96d4a763ec4af415e8261d54a35f5fa8"} Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.551505 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-69lb2" event={"ID":"56755333-86a4-4a45-b49a-c518575ad5f0","Type":"ContainerStarted","Data":"e4647bd0bbf96417d832ca3ab53147def33c5a5129fa8dc3872caddedc05090d"} Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.554314 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7gdqg" event={"ID":"dea7593b-32bb-4d48-b47a-2cf9aa0d4185","Type":"ContainerStarted","Data":"1c4b2f0a4592abfe0d3e4d08cab2a30fd47e3fd9014e0b7c822260e21900b83d"} Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.555804 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-cbltv" podStartSLOduration=127.555784646 podStartE2EDuration="2m7.555784646s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:05.550919205 +0000 UTC m=+147.939936728" watchObservedRunningTime="2026-01-27 15:52:05.555784646 +0000 UTC m=+147.944802169" Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.557551 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-gfdql" event={"ID":"762f91d9-714d-4ba5-8c0c-f64498897186","Type":"ContainerStarted","Data":"c87511cb922eb94539016839224b2927a0e9cb8e7b2db07dd42679b5acd1ed99"} Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.577506 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l4l7n" event={"ID":"4d7e5c51-63bd-46b6-adef-459b93b18142","Type":"ContainerStarted","Data":"6101584b43f4b20292d0b077317bd43949e01734c1c01a21a928e56039182498"} Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.600779 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-4vjsw" event={"ID":"df3e72cd-0745-4a8e-b3b5-25d23bccaa1c","Type":"ContainerStarted","Data":"7a5d02e78be533a25699f9ad1f67bf9596656a6db315be805ceacceb5b1f5507"} Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.605564 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4vdpc" event={"ID":"c118259f-65cb-437d-abda-b69562018d38","Type":"ContainerStarted","Data":"c490f97c8b986373a95fdaee527b6f9cfa04b1a2a890076cbe8e9f57071a1780"} Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.606571 4767 pod_startup_latency_tracker.go:104] "Observed 
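[Each patch_prober/prober pair above records a single readiness check: an HTTP GET against the container's endpoint in which any transport error, such as "connection refused" from a server that has not yet bound its port, is reported as probeResult="failure". The sketch below is a self-contained illustration of that check against a hypothetical local endpoint; it is not kubelet's actual prober, which also handles TLS, headers, and body truncation.]

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // probeOnce performs one HTTP readiness check in the spirit of the
    // prober entries above: a transport error (e.g. connection refused
    // while the container is still starting) is a failure, as is any
    // status outside 200-399.
    func probeOnce(url string) (result, output string) {
    	client := &http.Client{Timeout: time.Second}
    	resp, err := client.Get(url)
    	if err != nil {
    		return "failure", err.Error()
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
    		return "success", resp.Status
    	}
    	return "failure", resp.Status
    }

    func main() {
    	// Hypothetical stand-in for e.g. https://10.217.0.35:8443/healthz.
    	result, output := probeOnce("http://127.0.0.1:1936/healthz/ready")
    	fmt.Printf("probeResult=%q output=%q\n", result, output)
    }

[With nothing listening on the port, this prints a failure whose output contains "connect: connection refused", matching the entries above; once the container binds its port the same check flips to success, which is what the later "SyncLoop (probe)" readiness transitions record.]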
pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v2v2x" podStartSLOduration=126.606490307 podStartE2EDuration="2m6.606490307s" podCreationTimestamp="2026-01-27 15:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:05.597588028 +0000 UTC m=+147.986605561" watchObservedRunningTime="2026-01-27 15:52:05.606490307 +0000 UTC m=+147.995507830" Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.608161 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-4blj6" event={"ID":"bb9fa9e7-f243-4240-b739-babed8be646f","Type":"ContainerStarted","Data":"d05a79a5061c3c6827a3d78b4210a12c52b48f28ef432c424edf7c987f577b7f"} Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.613442 4767 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7m254 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.613508 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" podUID="9d9edf4c-6df3-484c-9bb7-a344d8147aa6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.613666 4767 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-t67t2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.613697 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" podUID="34c3a00d-6b69-4790-ba95-29ae01dd296f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.613763 4767 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksqxd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.613783 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ksqxd" podUID="25e39933-042b-46a8-9e96-19acb0944e08" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.640189 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:05 crc kubenswrapper[4767]: E0127 15:52:05.642119 4767 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:06.142098479 +0000 UTC m=+148.531116062 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.658547 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqjkg" podStartSLOduration=127.658524256 podStartE2EDuration="2m7.658524256s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:05.654785297 +0000 UTC m=+148.043802820" watchObservedRunningTime="2026-01-27 15:52:05.658524256 +0000 UTC m=+148.047541779" Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.679052 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9tjmr" podStartSLOduration=126.6790316 podStartE2EDuration="2m6.6790316s" podCreationTimestamp="2026-01-27 15:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:05.678696981 +0000 UTC m=+148.067714514" watchObservedRunningTime="2026-01-27 15:52:05.6790316 +0000 UTC m=+148.068049123" Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.693885 4767 patch_prober.go:28] interesting pod/router-default-5444994796-4n6ch container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.693938 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4n6ch" podUID="333657b1-ebc6-4900-93eb-7762fd0eeaac" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.727170 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cd7g2" podStartSLOduration=127.727148506 podStartE2EDuration="2m7.727148506s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:05.724577831 +0000 UTC m=+148.113595364" watchObservedRunningTime="2026-01-27 15:52:05.727148506 +0000 UTC m=+148.116166029" Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.741478 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:05 crc kubenswrapper[4767]: E0127 15:52:05.744073 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:06.244053966 +0000 UTC m=+148.633071569 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.760264 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8mtbc" podStartSLOduration=127.760242756 podStartE2EDuration="2m7.760242756s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:05.754959173 +0000 UTC m=+148.143976706" watchObservedRunningTime="2026-01-27 15:52:05.760242756 +0000 UTC m=+148.149260299" Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.777110 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-69lb2" podStartSLOduration=127.777092515 podStartE2EDuration="2m7.777092515s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:05.776668432 +0000 UTC m=+148.165685965" watchObservedRunningTime="2026-01-27 15:52:05.777092515 +0000 UTC m=+148.166110038" Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.794546 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-4blj6" podStartSLOduration=127.79452852 podStartE2EDuration="2m7.79452852s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:05.793311055 +0000 UTC m=+148.182328578" watchObservedRunningTime="2026-01-27 15:52:05.79452852 +0000 UTC m=+148.183546043" Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.816463 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7gdqg" podStartSLOduration=127.816447346 podStartE2EDuration="2m7.816447346s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:05.814573922 +0000 UTC m=+148.203591435" watchObservedRunningTime="2026-01-27 15:52:05.816447346 +0000 UTC m=+148.205464869" Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 
15:52:05.840090 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-4vjsw" podStartSLOduration=126.840073881 podStartE2EDuration="2m6.840073881s" podCreationTimestamp="2026-01-27 15:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:05.838258608 +0000 UTC m=+148.227276131" watchObservedRunningTime="2026-01-27 15:52:05.840073881 +0000 UTC m=+148.229091404" Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.843328 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:05 crc kubenswrapper[4767]: E0127 15:52:05.843524 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:06.34349572 +0000 UTC m=+148.732513243 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.843914 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:05 crc kubenswrapper[4767]: E0127 15:52:05.844332 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:06.344323044 +0000 UTC m=+148.733340567 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.945349 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:05 crc kubenswrapper[4767]: E0127 15:52:05.945496 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:06.445479148 +0000 UTC m=+148.834496671 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:05 crc kubenswrapper[4767]: I0127 15:52:05.945629 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:05 crc kubenswrapper[4767]: E0127 15:52:05.945934 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:06.445926201 +0000 UTC m=+148.834943724 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:06 crc kubenswrapper[4767]: E0127 15:52:06.046672 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:06.546653543 +0000 UTC m=+148.935671066 (durationBeforeRetry 500ms). 
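[Every TearDown and MountDevice failure in this section bottoms out in the same lookup: kubelet can only construct a CSI client for a driver whose node plugin has registered an endpoint with it, and kubevirt.io.hostpath-provisioner has not yet done so. The Go sketch below is a toy stand-in for that registry lookup; the types, method names, and socket path are all invented for illustration, and real registration happens over kubelet's plugin-registration mechanism.]

    package main

    import (
    	"fmt"
    	"sync"
    )

    // csiDriverRegistry illustrates why "failed to get CSI client: driver
    // name ... not found in the list of registered CSI drivers" repeats:
    // until the plugin registers, the lookup has nothing to return.
    type csiDriverRegistry struct {
    	mu        sync.RWMutex
    	endpoints map[string]string // driver name -> socket path (invented)
    }

    func (r *csiDriverRegistry) clientFor(name string) (string, error) {
    	r.mu.RLock()
    	defer r.mu.RUnlock()
    	ep, ok := r.endpoints[name]
    	if !ok {
    		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
    	}
    	return ep, nil
    }

    func (r *csiDriverRegistry) register(name, endpoint string) {
    	r.mu.Lock()
    	defer r.mu.Unlock()
    	r.endpoints[name] = endpoint
    }

    func main() {
    	reg := &csiDriverRegistry{endpoints: map[string]string{}}

    	// Before the node plugin registers: the failure repeating above.
    	if _, err := reg.clientFor("kubevirt.io.hostpath-provisioner"); err != nil {
    		fmt.Println(err)
    	}

    	// Once registration happens, the same lookup succeeds and the
    	// mount/unmount retry loop can finally drain.
    	reg.register("kubevirt.io.hostpath-provisioner", "/var/lib/kubelet/plugins/hostpath/csi.sock")
    	ep, _ := reg.clientFor("kubevirt.io.hostpath-provisioner")
    	fmt.Println("endpoint:", ep)
    }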
Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.046672 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.046989 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp"
Jan 27 15:52:06 crc kubenswrapper[4767]: E0127 15:52:06.047326 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:06.547319232 +0000 UTC m=+148.936336755 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.148349 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:52:06 crc kubenswrapper[4767]: E0127 15:52:06.148519 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:06.648492676 +0000 UTC m=+149.037510199 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.148688 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp"
Jan 27 15:52:06 crc kubenswrapper[4767]: E0127 15:52:06.148990 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:06.648982401 +0000 UTC m=+149.037999924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.249410 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:52:06 crc kubenswrapper[4767]: E0127 15:52:06.249621 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:06.749605639 +0000 UTC m=+149.138623162 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.249761 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.249879 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:06 crc kubenswrapper[4767]: E0127 15:52:06.250144 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:06.750136304 +0000 UTC m=+149.139153827 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.256026 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.351212 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.351695 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.351723 4767 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.351802 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:52:06 crc kubenswrapper[4767]: E0127 15:52:06.352996 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:06.852975427 +0000 UTC m=+149.241992950 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.355478 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.357306 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.364215 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.372430 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.395084 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.455122 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:06 crc kubenswrapper[4767]: E0127 15:52:06.455514 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:06.955499021 +0000 UTC m=+149.344516544 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.556804 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:06 crc kubenswrapper[4767]: E0127 15:52:06.557682 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:07.057663464 +0000 UTC m=+149.446680987 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.628352 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" event={"ID":"331bbbbd-b003-4190-b8a6-149cc2b81b39","Type":"ContainerStarted","Data":"7c39978788c523000a57a340d5c227ad7ea08ef1c1798a973fa00d9a81ddb110"} Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.644758 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6mgl" event={"ID":"bb803c2c-ff0b-4f4a-a566-d0ca1957ce56","Type":"ContainerStarted","Data":"eeeeb76d7e777531b5c21ef0b67576a59f6bc5fbc53b45ed395c94b276e7580c"} Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.648827 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-fctcl" event={"ID":"29ab3a2b-59d9-4e16-915f-f76e1d215929","Type":"ContainerStarted","Data":"5814f495bea8ca35c6fab4b8ee5e12dc619866d5c684fdd623dabca4ffeb981a"} Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.648829 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.659167 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-gfdql" event={"ID":"762f91d9-714d-4ba5-8c0c-f64498897186","Type":"ContainerStarted","Data":"98c30b385b86f4dd91663ec853d0a1449a99eca5c38ff9876c179a7eecfb3abb"} Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.660731 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:06 crc kubenswrapper[4767]: E0127 15:52:06.661474 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:07.161457314 +0000 UTC m=+149.550474837 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.690220 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-px962" event={"ID":"3580a3b5-6640-41c2-b61f-863c299c59c6","Type":"ContainerStarted","Data":"db29d5123c697ca3f52036dcb3ebfe251ba1c026ee67baafab1208f9ec60cb9b"} Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.691192 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-px962" Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.702813 4767 patch_prober.go:28] interesting pod/router-default-5444994796-4n6ch container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.702886 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4n6ch" podUID="333657b1-ebc6-4900-93eb-7762fd0eeaac" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.706175 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbflz" event={"ID":"8299442b-4dd3-4520-9e47-d461d0538647","Type":"ContainerStarted","Data":"84ec70a610eba89d8c7de4262d69a42bb74fab3fb01677d7bb164b8e737beed0"} Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.718555 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" event={"ID":"c2a37542-d13b-431e-a375-69e3fc2e90eb","Type":"ContainerStarted","Data":"cadff93c43d635555b49a9fd32a0b9777b338df6515d5e95b035355098f26e89"} Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.721544 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-px962" podStartSLOduration=8.721517686 podStartE2EDuration="8.721517686s" podCreationTimestamp="2026-01-27 15:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:06.718180249 +0000 UTC m=+149.107197772" watchObservedRunningTime="2026-01-27 15:52:06.721517686 +0000 UTC m=+149.110535209" Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.723441 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hgzzw" event={"ID":"9602d005-3eaf-4e35-a19b-a406036cc295","Type":"ContainerStarted","Data":"5518e6b289762a7f96e3faf7e2553175feb19bb7cbe9d4cade2bd176af332cb2"} Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.747389 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" podStartSLOduration=128.747364226 podStartE2EDuration="2m8.747364226s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:06.745662437 +0000 UTC m=+149.134679980" watchObservedRunningTime="2026-01-27 15:52:06.747364226 +0000 UTC m=+149.136381749" Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.749536 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vnr5s" event={"ID":"fd479a9b-8563-433e-aae2-ab0856594b3f","Type":"ContainerStarted","Data":"ca555e46292930f3db88e8d56293c7738c8055551180d1cd680585f48a3777c1"} Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.750685 4767 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cbltv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.750727 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cbltv" podUID="aa6ea9de-9f71-4d6e-8304-536ccfaeaec0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.750727 4767 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-v2v2x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.750776 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v2v2x" podUID="0ea03516-b574-4e25-8f8f-b45c358b5295" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.750966 4767 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-tqzlw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.28:6443/healthz\": dial tcp 10.217.0.28:6443: connect: connection refused" start-of-body= Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.750991 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" podUID="9bc30087-3b0d-441b-b384-853b7e1003ad" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.28:6443/healthz\": dial tcp 10.217.0.28:6443: connect: connection refused" Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.751054 4767 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vqjkg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.751410 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqjkg" podUID="2a405d09-41d7-423a-a5d0-5413839ee40b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 
10.217.0.40:8443: connect: connection refused" Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.751477 4767 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksqxd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.751496 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ksqxd" podUID="25e39933-042b-46a8-9e96-19acb0944e08" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.767054 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:06 crc kubenswrapper[4767]: E0127 15:52:06.776870 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:07.276834701 +0000 UTC m=+149.665852214 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.801515 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mbflz" podStartSLOduration=128.801465695 podStartE2EDuration="2m8.801465695s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:06.794350429 +0000 UTC m=+149.183367962" watchObservedRunningTime="2026-01-27 15:52:06.801465695 +0000 UTC m=+149.190483228" Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.830253 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4vdpc" podStartSLOduration=128.830183598 podStartE2EDuration="2m8.830183598s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:06.826282625 +0000 UTC m=+149.215300148" watchObservedRunningTime="2026-01-27 15:52:06.830183598 +0000 UTC m=+149.219201121" Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.855246 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hgzzw" podStartSLOduration=128.855226954 podStartE2EDuration="2m8.855226954s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:06.853232596 +0000 UTC m=+149.242250129" watchObservedRunningTime="2026-01-27 15:52:06.855226954 +0000 UTC m=+149.244244477" Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.891440 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:06 crc kubenswrapper[4767]: E0127 15:52:06.893301 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:07.393286998 +0000 UTC m=+149.782304521 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.941914 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vnr5s" podStartSLOduration=128.941878708 podStartE2EDuration="2m8.941878708s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:06.890714804 +0000 UTC m=+149.279732327" watchObservedRunningTime="2026-01-27 15:52:06.941878708 +0000 UTC m=+149.330896231" Jan 27 15:52:06 crc kubenswrapper[4767]: I0127 15:52:06.979541 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l4l7n" podStartSLOduration=128.979512539 podStartE2EDuration="2m8.979512539s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:06.92402117 +0000 UTC m=+149.313038723" watchObservedRunningTime="2026-01-27 15:52:06.979512539 +0000 UTC m=+149.368530062" Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.002966 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:07 crc kubenswrapper[4767]: E0127 15:52:07.003281 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:07.503250257 +0000 UTC m=+149.892267780 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.003539 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:07 crc kubenswrapper[4767]: E0127 15:52:07.003880 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:07.503868615 +0000 UTC m=+149.892886138 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:07 crc kubenswrapper[4767]: W0127 15:52:07.045119 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-130119e17bce9a0a6d60f30a8da65bbfae9fd5d732aa693f4de8069a5e053c9d WatchSource:0}: Error finding container 130119e17bce9a0a6d60f30a8da65bbfae9fd5d732aa693f4de8069a5e053c9d: Status 404 returned error can't find the container with id 130119e17bce9a0a6d60f30a8da65bbfae9fd5d732aa693f4de8069a5e053c9d Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.106859 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:07 crc kubenswrapper[4767]: E0127 15:52:07.107399 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:07.607378808 +0000 UTC m=+149.996396331 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.209127 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:07 crc kubenswrapper[4767]: E0127 15:52:07.209597 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:07.709571521 +0000 UTC m=+150.098589044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.294415 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l4l7n" Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.310311 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:07 crc kubenswrapper[4767]: E0127 15:52:07.310716 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:07.810689274 +0000 UTC m=+150.199706797 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.412230 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:07 crc kubenswrapper[4767]: E0127 15:52:07.412701 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:07.912682342 +0000 UTC m=+150.301699865 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.514480 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:07 crc kubenswrapper[4767]: E0127 15:52:07.514674 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:08.014647719 +0000 UTC m=+150.403665242 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.514815 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:07 crc kubenswrapper[4767]: E0127 15:52:07.515164 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:08.015151414 +0000 UTC m=+150.404168937 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.615946 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:07 crc kubenswrapper[4767]: E0127 15:52:07.616092 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:08.116073471 +0000 UTC m=+150.505090994 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.616256 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:07 crc kubenswrapper[4767]: E0127 15:52:07.616615 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:08.116605406 +0000 UTC m=+150.505622929 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.641174 4767 csr.go:261] certificate signing request csr-vtmzc is approved, waiting to be issued Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.647597 4767 csr.go:257] certificate signing request csr-vtmzc is issued Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.699005 4767 patch_prober.go:28] interesting pod/router-default-5444994796-4n6ch container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:52:07 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld Jan 27 15:52:07 crc kubenswrapper[4767]: [+]process-running ok Jan 27 15:52:07 crc kubenswrapper[4767]: healthz check failed Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.699077 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4n6ch" podUID="333657b1-ebc6-4900-93eb-7762fd0eeaac" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.717834 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:07 crc kubenswrapper[4767]: E0127 15:52:07.718031 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 15:52:08.217995456 +0000 UTC m=+150.607012989 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.758170 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f502d120f911634ca1690d582bd7b527c024b8aee60bff4f76304052448b1d39"} Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.758242 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"6ecace6a50d25fd3e611a0949660d9ef32bebd343e50dd463745a57140a54c51"} Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.760068 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1bbce2dc9b5fb7a02c7f811264ca8c2d8f5b84c6a750a84e526c31a644bc7f6b"} Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.760097 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ead81b451ced3fcd97ea299d7216b68b8d29e36496329d9eebfdc6ddaadd96df"} Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.760250 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.769981 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"db812271a187006c69775adb4dab3820267f6331edc8966b1fbce9e6cef96da7"} Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.770039 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"130119e17bce9a0a6d60f30a8da65bbfae9fd5d732aa693f4de8069a5e053c9d"} Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.773749 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5pb8t" event={"ID":"6283b57b-899c-4d3d-b1a4-531a683d3853","Type":"ContainerStarted","Data":"d861488887c69656cfc8b32b9cf191528fcb4b3ce67f4accb970e93d592d8c56"} Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.774122 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6mgl" Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.774891 4767 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cbltv container/marketplace-operator namespace/openshift-marketplace: Readiness 
probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.774930 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cbltv" podUID="aa6ea9de-9f71-4d6e-8304-536ccfaeaec0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.774952 4767 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-v2v2x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.775009 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v2v2x" podUID="0ea03516-b574-4e25-8f8f-b45c358b5295" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.820250 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:07 crc kubenswrapper[4767]: E0127 15:52:07.822411 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:08.322393494 +0000 UTC m=+150.711411087 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.850194 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-fctcl" podStartSLOduration=129.85017032 podStartE2EDuration="2m9.85017032s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:07.849561262 +0000 UTC m=+150.238578785" watchObservedRunningTime="2026-01-27 15:52:07.85017032 +0000 UTC m=+150.239187843" Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.867233 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6mgl" podStartSLOduration=128.867216374 podStartE2EDuration="2m8.867216374s" podCreationTimestamp="2026-01-27 15:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:07.864161135 +0000 UTC m=+150.253178658" watchObservedRunningTime="2026-01-27 15:52:07.867216374 +0000 UTC m=+150.256233887" Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.922998 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:07 crc kubenswrapper[4767]: E0127 15:52:07.923402 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:08.423367663 +0000 UTC m=+150.812385186 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.923472 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:07 crc kubenswrapper[4767]: E0127 15:52:07.923857 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:08.423843126 +0000 UTC m=+150.812860639 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.933304 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-gfdql" podStartSLOduration=129.93328147 podStartE2EDuration="2m9.93328147s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:07.917997857 +0000 UTC m=+150.307015390" watchObservedRunningTime="2026-01-27 15:52:07.93328147 +0000 UTC m=+150.322298993" Jan 27 15:52:07 crc kubenswrapper[4767]: I0127 15:52:07.990621 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" podStartSLOduration=129.990605553 podStartE2EDuration="2m9.990605553s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:07.983918739 +0000 UTC m=+150.372936262" watchObservedRunningTime="2026-01-27 15:52:07.990605553 +0000 UTC m=+150.379623076" Jan 27 15:52:08 crc kubenswrapper[4767]: I0127 15:52:08.024334 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:08 crc kubenswrapper[4767]: E0127 15:52:08.024625 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:08.524605469 +0000 UTC m=+150.913622992 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:08 crc kubenswrapper[4767]: I0127 15:52:08.125872 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:08 crc kubenswrapper[4767]: E0127 15:52:08.126393 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:08.62637316 +0000 UTC m=+151.015390683 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:08 crc kubenswrapper[4767]: I0127 15:52:08.226710 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:08 crc kubenswrapper[4767]: E0127 15:52:08.227057 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:08.72703669 +0000 UTC m=+151.116054213 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:08 crc kubenswrapper[4767]: I0127 15:52:08.328169 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:08 crc kubenswrapper[4767]: E0127 15:52:08.328573 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:08.828551444 +0000 UTC m=+151.217569027 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:08 crc kubenswrapper[4767]: I0127 15:52:08.429262 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:08 crc kubenswrapper[4767]: E0127 15:52:08.429441 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:08.9294162 +0000 UTC m=+151.318433723 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:08 crc kubenswrapper[4767]: I0127 15:52:08.429651 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:08 crc kubenswrapper[4767]: E0127 15:52:08.430003 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:08.929987696 +0000 UTC m=+151.319005219 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:08 crc kubenswrapper[4767]: I0127 15:52:08.530636 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:08 crc kubenswrapper[4767]: E0127 15:52:08.530903 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:09.030879022 +0000 UTC m=+151.419896545 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:08 crc kubenswrapper[4767]: I0127 15:52:08.530965 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:08 crc kubenswrapper[4767]: E0127 15:52:08.531288 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:09.031281844 +0000 UTC m=+151.420299367 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:08 crc kubenswrapper[4767]: I0127 15:52:08.631994 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:08 crc kubenswrapper[4767]: E0127 15:52:08.632139 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:09.132118079 +0000 UTC m=+151.521135622 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:08 crc kubenswrapper[4767]: I0127 15:52:08.632265 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:08 crc kubenswrapper[4767]: E0127 15:52:08.632615 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:09.132604413 +0000 UTC m=+151.521621936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:08 crc kubenswrapper[4767]: I0127 15:52:08.648805 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-27 15:47:07 +0000 UTC, rotation deadline is 2026-11-21 16:47:31.375079599 +0000 UTC Jan 27 15:52:08 crc kubenswrapper[4767]: I0127 15:52:08.648855 4767 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7152h55m22.726228455s for next certificate rotation Jan 27 15:52:08 crc kubenswrapper[4767]: I0127 15:52:08.704958 4767 patch_prober.go:28] interesting pod/router-default-5444994796-4n6ch container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:52:08 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld Jan 27 15:52:08 crc kubenswrapper[4767]: [+]process-running ok Jan 27 15:52:08 crc kubenswrapper[4767]: healthz check failed Jan 27 15:52:08 crc kubenswrapper[4767]: I0127 15:52:08.705023 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4n6ch" podUID="333657b1-ebc6-4900-93eb-7762fd0eeaac" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:52:08 crc kubenswrapper[4767]: I0127 15:52:08.732711 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:08 crc kubenswrapper[4767]: E0127 15:52:08.732906 4767 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:09.232880801 +0000 UTC m=+151.621898324 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:08 crc kubenswrapper[4767]: I0127 15:52:08.779299 4767 generic.go:334] "Generic (PLEG): container finished" podID="df3e72cd-0745-4a8e-b3b5-25d23bccaa1c" containerID="7a5d02e78be533a25699f9ad1f67bf9596656a6db315be805ceacceb5b1f5507" exitCode=0 Jan 27 15:52:08 crc kubenswrapper[4767]: I0127 15:52:08.779473 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-4vjsw" event={"ID":"df3e72cd-0745-4a8e-b3b5-25d23bccaa1c","Type":"ContainerDied","Data":"7a5d02e78be533a25699f9ad1f67bf9596656a6db315be805ceacceb5b1f5507"} Jan 27 15:52:08 crc kubenswrapper[4767]: I0127 15:52:08.834018 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:08 crc kubenswrapper[4767]: E0127 15:52:08.834529 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:09.334509209 +0000 UTC m=+151.723526802 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:08 crc kubenswrapper[4767]: I0127 15:52:08.934665 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:08 crc kubenswrapper[4767]: E0127 15:52:08.934832 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:09.434808278 +0000 UTC m=+151.823825801 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:08 crc kubenswrapper[4767]: I0127 15:52:08.934978 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:08 crc kubenswrapper[4767]: E0127 15:52:08.935357 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:09.435346823 +0000 UTC m=+151.824364346 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.035815 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:09 crc kubenswrapper[4767]: E0127 15:52:09.036075 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:09.536028023 +0000 UTC m=+151.925045546 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.036236 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:09 crc kubenswrapper[4767]: E0127 15:52:09.036804 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:09.536794346 +0000 UTC m=+151.925811869 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.136887 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:09 crc kubenswrapper[4767]: E0127 15:52:09.137080 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:09.637048003 +0000 UTC m=+152.026065526 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.137175 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:09 crc kubenswrapper[4767]: E0127 15:52:09.137528 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:09.637510887 +0000 UTC m=+152.026528420 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.241473 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:09 crc kubenswrapper[4767]: E0127 15:52:09.241664 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:09.741640927 +0000 UTC m=+152.130658450 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.241961 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:09 crc kubenswrapper[4767]: E0127 15:52:09.242331 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:09.742314816 +0000 UTC m=+152.131332339 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.343506 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:09 crc kubenswrapper[4767]: E0127 15:52:09.343674 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:09.843652446 +0000 UTC m=+152.232669979 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.343727 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:09 crc kubenswrapper[4767]: E0127 15:52:09.344061 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:09.844037397 +0000 UTC m=+152.233054930 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.444575 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:09 crc kubenswrapper[4767]: E0127 15:52:09.444714 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:09.944696016 +0000 UTC m=+152.333713539 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.444789 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:09 crc kubenswrapper[4767]: E0127 15:52:09.445080 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:09.945073157 +0000 UTC m=+152.334090680 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.545694 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:09 crc kubenswrapper[4767]: E0127 15:52:09.545906 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:10.045875581 +0000 UTC m=+152.434893104 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.546304 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:09 crc kubenswrapper[4767]: E0127 15:52:09.546680 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:10.046661944 +0000 UTC m=+152.435679467 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.646901 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:09 crc kubenswrapper[4767]: E0127 15:52:09.647018 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:10.147002564 +0000 UTC m=+152.536020087 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.647330 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:09 crc kubenswrapper[4767]: E0127 15:52:09.647671 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:10.147657833 +0000 UTC m=+152.536675376 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.691864 4767 patch_prober.go:28] interesting pod/router-default-5444994796-4n6ch container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:52:09 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld Jan 27 15:52:09 crc kubenswrapper[4767]: [+]process-running ok Jan 27 15:52:09 crc kubenswrapper[4767]: healthz check failed Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.691931 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4n6ch" podUID="333657b1-ebc6-4900-93eb-7762fd0eeaac" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.748691 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:09 crc kubenswrapper[4767]: E0127 15:52:09.748866 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:10.248840407 +0000 UTC m=+152.637857930 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.748912 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:09 crc kubenswrapper[4767]: E0127 15:52:09.749237 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:10.249223929 +0000 UTC m=+152.638241452 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.842153 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.842788 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.848750 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.849857 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:09 crc kubenswrapper[4767]: E0127 15:52:09.850037 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:10.350004661 +0000 UTC m=+152.739022184 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.850077 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb0e1b4f-3732-43df-b2ee-d91066f7fb1b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fb0e1b4f-3732-43df-b2ee-d91066f7fb1b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.850132 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb0e1b4f-3732-43df-b2ee-d91066f7fb1b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fb0e1b4f-3732-43df-b2ee-d91066f7fb1b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.850427 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:09 crc kubenswrapper[4767]: E0127 15:52:09.855673 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:10.355652555 +0000 UTC m=+152.744670078 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.856738 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.861459 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.953707 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:09 crc kubenswrapper[4767]: E0127 15:52:09.953880 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:10.453845743 +0000 UTC m=+152.842863266 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.953917 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb0e1b4f-3732-43df-b2ee-d91066f7fb1b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fb0e1b4f-3732-43df-b2ee-d91066f7fb1b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.953972 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb0e1b4f-3732-43df-b2ee-d91066f7fb1b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fb0e1b4f-3732-43df-b2ee-d91066f7fb1b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.954044 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:09 crc kubenswrapper[4767]: I0127 15:52:09.954042 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/fb0e1b4f-3732-43df-b2ee-d91066f7fb1b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fb0e1b4f-3732-43df-b2ee-d91066f7fb1b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:52:09 crc kubenswrapper[4767]: E0127 15:52:09.954401 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:10.454386749 +0000 UTC m=+152.843404282 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.015989 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb0e1b4f-3732-43df-b2ee-d91066f7fb1b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fb0e1b4f-3732-43df-b2ee-d91066f7fb1b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.065265 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:10 crc kubenswrapper[4767]: E0127 15:52:10.065885 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:10.565867912 +0000 UTC m=+152.954885425 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.167444 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.167698 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:52:10 crc kubenswrapper[4767]: E0127 15:52:10.167880 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:10.667863581 +0000 UTC m=+153.056881114 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.269822 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:10 crc kubenswrapper[4767]: E0127 15:52:10.270012 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:10.769987462 +0000 UTC m=+153.159004985 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.270229 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:10 crc kubenswrapper[4767]: E0127 15:52:10.270531 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:10.770522408 +0000 UTC m=+153.159539931 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.295386 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-4vjsw" Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.374146 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df3e72cd-0745-4a8e-b3b5-25d23bccaa1c-secret-volume\") pod \"df3e72cd-0745-4a8e-b3b5-25d23bccaa1c\" (UID: \"df3e72cd-0745-4a8e-b3b5-25d23bccaa1c\") " Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.374536 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df3e72cd-0745-4a8e-b3b5-25d23bccaa1c-config-volume\") pod \"df3e72cd-0745-4a8e-b3b5-25d23bccaa1c\" (UID: \"df3e72cd-0745-4a8e-b3b5-25d23bccaa1c\") " Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.374639 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.374739 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrwbk\" (UniqueName: \"kubernetes.io/projected/df3e72cd-0745-4a8e-b3b5-25d23bccaa1c-kube-api-access-mrwbk\") pod \"df3e72cd-0745-4a8e-b3b5-25d23bccaa1c\" (UID: \"df3e72cd-0745-4a8e-b3b5-25d23bccaa1c\") " Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.378103 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df3e72cd-0745-4a8e-b3b5-25d23bccaa1c-config-volume" (OuterVolumeSpecName: "config-volume") pod "df3e72cd-0745-4a8e-b3b5-25d23bccaa1c" (UID: "df3e72cd-0745-4a8e-b3b5-25d23bccaa1c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:52:10 crc kubenswrapper[4767]: E0127 15:52:10.378243 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:10.878226232 +0000 UTC m=+153.267243755 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.387590 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df3e72cd-0745-4a8e-b3b5-25d23bccaa1c-kube-api-access-mrwbk" (OuterVolumeSpecName: "kube-api-access-mrwbk") pod "df3e72cd-0745-4a8e-b3b5-25d23bccaa1c" (UID: "df3e72cd-0745-4a8e-b3b5-25d23bccaa1c"). InnerVolumeSpecName "kube-api-access-mrwbk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.390743 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df3e72cd-0745-4a8e-b3b5-25d23bccaa1c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "df3e72cd-0745-4a8e-b3b5-25d23bccaa1c" (UID: "df3e72cd-0745-4a8e-b3b5-25d23bccaa1c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.476143 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.476306 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrwbk\" (UniqueName: \"kubernetes.io/projected/df3e72cd-0745-4a8e-b3b5-25d23bccaa1c-kube-api-access-mrwbk\") on node \"crc\" DevicePath \"\"" Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.476321 4767 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df3e72cd-0745-4a8e-b3b5-25d23bccaa1c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.476330 4767 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df3e72cd-0745-4a8e-b3b5-25d23bccaa1c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:52:10 crc kubenswrapper[4767]: E0127 15:52:10.476608 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:10.976590185 +0000 UTC m=+153.365607708 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.538213 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.578057 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:10 crc kubenswrapper[4767]: E0127 15:52:10.578460 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:11.078440069 +0000 UTC m=+153.467457592 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.679529 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:10 crc kubenswrapper[4767]: E0127 15:52:10.679976 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:11.179961263 +0000 UTC m=+153.568978776 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.695381 4767 patch_prober.go:28] interesting pod/router-default-5444994796-4n6ch container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:52:10 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld Jan 27 15:52:10 crc kubenswrapper[4767]: [+]process-running ok Jan 27 15:52:10 crc kubenswrapper[4767]: healthz check failed Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.695460 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4n6ch" podUID="333657b1-ebc6-4900-93eb-7762fd0eeaac" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.715409 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l4l7n" Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.780572 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:10 crc kubenswrapper[4767]: E0127 15:52:10.781647 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:11.281625742 +0000 UTC m=+153.670643275 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.803375 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-4vjsw" event={"ID":"df3e72cd-0745-4a8e-b3b5-25d23bccaa1c","Type":"ContainerDied","Data":"6e02ea4813755b72eff5d622bdaca2c3a9c1cdf7ae5e71c7e5a460915de755a8"} Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.803421 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e02ea4813755b72eff5d622bdaca2c3a9c1cdf7ae5e71c7e5a460915de755a8" Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.803499 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-4vjsw" Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.809937 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"fb0e1b4f-3732-43df-b2ee-d91066f7fb1b","Type":"ContainerStarted","Data":"e04b4d360c0f002bd0e7e78b15dbefe3bbb6ed0fc625794e05378704fe666596"} Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.882076 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:10 crc kubenswrapper[4767]: E0127 15:52:10.882798 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:11.382781716 +0000 UTC m=+153.771799249 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:10 crc kubenswrapper[4767]: I0127 15:52:10.984297 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:10 crc kubenswrapper[4767]: E0127 15:52:10.984794 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 15:52:11.484756943 +0000 UTC m=+153.873774486 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.085844 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:11 crc kubenswrapper[4767]: E0127 15:52:11.086266 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:11.586248567 +0000 UTC m=+153.975266090 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.108906 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6v8jc"] Jan 27 15:52:11 crc kubenswrapper[4767]: E0127 15:52:11.109158 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df3e72cd-0745-4a8e-b3b5-25d23bccaa1c" containerName="collect-profiles" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.109176 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="df3e72cd-0745-4a8e-b3b5-25d23bccaa1c" containerName="collect-profiles" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.109323 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="df3e72cd-0745-4a8e-b3b5-25d23bccaa1c" containerName="collect-profiles" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.110194 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6v8jc" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.116225 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.138673 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6v8jc"] Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.186778 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:11 crc kubenswrapper[4767]: E0127 15:52:11.187042 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:11.687000829 +0000 UTC m=+154.076018352 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.187127 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b45a028d-9f8c-4090-985b-e7ddf929554c-catalog-content\") pod \"community-operators-6v8jc\" (UID: \"b45a028d-9f8c-4090-985b-e7ddf929554c\") " pod="openshift-marketplace/community-operators-6v8jc" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.187439 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b45a028d-9f8c-4090-985b-e7ddf929554c-utilities\") pod \"community-operators-6v8jc\" (UID: \"b45a028d-9f8c-4090-985b-e7ddf929554c\") " pod="openshift-marketplace/community-operators-6v8jc" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.187492 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.187580 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms4kk\" (UniqueName: \"kubernetes.io/projected/b45a028d-9f8c-4090-985b-e7ddf929554c-kube-api-access-ms4kk\") pod \"community-operators-6v8jc\" (UID: \"b45a028d-9f8c-4090-985b-e7ddf929554c\") " pod="openshift-marketplace/community-operators-6v8jc" Jan 27 15:52:11 crc kubenswrapper[4767]: E0127 15:52:11.187869 4767 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:11.687841313 +0000 UTC m=+154.076859036 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.277829 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lbhhq"] Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.278957 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lbhhq" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.282351 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.284070 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.284114 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.288775 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:11 crc kubenswrapper[4767]: E0127 15:52:11.289132 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:11.788901044 +0000 UTC m=+154.177918577 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.289354 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f897714-8bcf-4ec4-8be0-86dfb0fc4785-utilities\") pod \"certified-operators-lbhhq\" (UID: \"5f897714-8bcf-4ec4-8be0-86dfb0fc4785\") " pod="openshift-marketplace/certified-operators-lbhhq" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.289469 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b45a028d-9f8c-4090-985b-e7ddf929554c-utilities\") pod \"community-operators-6v8jc\" (UID: \"b45a028d-9f8c-4090-985b-e7ddf929554c\") " pod="openshift-marketplace/community-operators-6v8jc" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.289494 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pdfj\" (UniqueName: \"kubernetes.io/projected/5f897714-8bcf-4ec4-8be0-86dfb0fc4785-kube-api-access-7pdfj\") pod \"certified-operators-lbhhq\" (UID: \"5f897714-8bcf-4ec4-8be0-86dfb0fc4785\") " pod="openshift-marketplace/certified-operators-lbhhq" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.289518 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.289565 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms4kk\" (UniqueName: \"kubernetes.io/projected/b45a028d-9f8c-4090-985b-e7ddf929554c-kube-api-access-ms4kk\") pod \"community-operators-6v8jc\" (UID: \"b45a028d-9f8c-4090-985b-e7ddf929554c\") " pod="openshift-marketplace/community-operators-6v8jc" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.289590 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f897714-8bcf-4ec4-8be0-86dfb0fc4785-catalog-content\") pod \"certified-operators-lbhhq\" (UID: \"5f897714-8bcf-4ec4-8be0-86dfb0fc4785\") " pod="openshift-marketplace/certified-operators-lbhhq" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.289628 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b45a028d-9f8c-4090-985b-e7ddf929554c-catalog-content\") pod \"community-operators-6v8jc\" (UID: \"b45a028d-9f8c-4090-985b-e7ddf929554c\") " pod="openshift-marketplace/community-operators-6v8jc" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.290469 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b45a028d-9f8c-4090-985b-e7ddf929554c-catalog-content\") pod \"community-operators-6v8jc\" (UID: \"b45a028d-9f8c-4090-985b-e7ddf929554c\") " pod="openshift-marketplace/community-operators-6v8jc" Jan 27 15:52:11 crc kubenswrapper[4767]: E0127 15:52:11.290629 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:11.790611403 +0000 UTC m=+154.179629116 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.290634 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b45a028d-9f8c-4090-985b-e7ddf929554c-utilities\") pod \"community-operators-6v8jc\" (UID: \"b45a028d-9f8c-4090-985b-e7ddf929554c\") " pod="openshift-marketplace/community-operators-6v8jc" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.310048 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lbhhq"] Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.313625 4767 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksqxd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.313879 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ksqxd" podUID="25e39933-042b-46a8-9e96-19acb0944e08" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.313705 4767 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksqxd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.314356 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ksqxd" podUID="25e39933-042b-46a8-9e96-19acb0944e08" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.329478 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.343767 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms4kk\" (UniqueName: \"kubernetes.io/projected/b45a028d-9f8c-4090-985b-e7ddf929554c-kube-api-access-ms4kk\") pod \"community-operators-6v8jc\" (UID: \"b45a028d-9f8c-4090-985b-e7ddf929554c\") " 
pod="openshift-marketplace/community-operators-6v8jc" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.368082 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-64xhv" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.376324 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-vxkdk" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.376391 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-vxkdk" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.383073 4767 patch_prober.go:28] interesting pod/console-f9d7485db-vxkdk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.383140 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-vxkdk" podUID="90596a9c-3db0-47e4-a002-a97cd73f2ab9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.391633 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.391846 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pdfj\" (UniqueName: \"kubernetes.io/projected/5f897714-8bcf-4ec4-8be0-86dfb0fc4785-kube-api-access-7pdfj\") pod \"certified-operators-lbhhq\" (UID: \"5f897714-8bcf-4ec4-8be0-86dfb0fc4785\") " pod="openshift-marketplace/certified-operators-lbhhq" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.391992 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f897714-8bcf-4ec4-8be0-86dfb0fc4785-catalog-content\") pod \"certified-operators-lbhhq\" (UID: \"5f897714-8bcf-4ec4-8be0-86dfb0fc4785\") " pod="openshift-marketplace/certified-operators-lbhhq" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.392049 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f897714-8bcf-4ec4-8be0-86dfb0fc4785-utilities\") pod \"certified-operators-lbhhq\" (UID: \"5f897714-8bcf-4ec4-8be0-86dfb0fc4785\") " pod="openshift-marketplace/certified-operators-lbhhq" Jan 27 15:52:11 crc kubenswrapper[4767]: E0127 15:52:11.392901 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:11.892882459 +0000 UTC m=+154.281899982 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.394244 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f897714-8bcf-4ec4-8be0-86dfb0fc4785-catalog-content\") pod \"certified-operators-lbhhq\" (UID: \"5f897714-8bcf-4ec4-8be0-86dfb0fc4785\") " pod="openshift-marketplace/certified-operators-lbhhq" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.394847 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f897714-8bcf-4ec4-8be0-86dfb0fc4785-utilities\") pod \"certified-operators-lbhhq\" (UID: \"5f897714-8bcf-4ec4-8be0-86dfb0fc4785\") " pod="openshift-marketplace/certified-operators-lbhhq" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.431691 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6v8jc" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.457309 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pdfj\" (UniqueName: \"kubernetes.io/projected/5f897714-8bcf-4ec4-8be0-86dfb0fc4785-kube-api-access-7pdfj\") pod \"certified-operators-lbhhq\" (UID: \"5f897714-8bcf-4ec4-8be0-86dfb0fc4785\") " pod="openshift-marketplace/certified-operators-lbhhq" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.468840 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wm4cz"] Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.470940 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wm4cz" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.498654 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.499664 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.499755 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.500249 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtcw7\" (UniqueName: \"kubernetes.io/projected/eabb94a2-a935-40be-a094-1a71d904b222-kube-api-access-wtcw7\") pod \"community-operators-wm4cz\" (UID: \"eabb94a2-a935-40be-a094-1a71d904b222\") " pod="openshift-marketplace/community-operators-wm4cz" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.500342 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.500374 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eabb94a2-a935-40be-a094-1a71d904b222-catalog-content\") pod \"community-operators-wm4cz\" (UID: \"eabb94a2-a935-40be-a094-1a71d904b222\") " pod="openshift-marketplace/community-operators-wm4cz" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.500397 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eabb94a2-a935-40be-a094-1a71d904b222-utilities\") pod \"community-operators-wm4cz\" (UID: \"eabb94a2-a935-40be-a094-1a71d904b222\") " pod="openshift-marketplace/community-operators-wm4cz" Jan 27 15:52:11 crc kubenswrapper[4767]: E0127 15:52:11.508467 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:12.008442131 +0000 UTC m=+154.397459654 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.513852 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wm4cz"] Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.524170 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.524595 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.565520 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9tjmr" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.604818 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lbhhq" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.605449 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.605706 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49403f5b-925d-44e0-b168-5aeed908af4e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"49403f5b-925d-44e0-b168-5aeed908af4e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.605758 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtcw7\" (UniqueName: \"kubernetes.io/projected/eabb94a2-a935-40be-a094-1a71d904b222-kube-api-access-wtcw7\") pod \"community-operators-wm4cz\" (UID: \"eabb94a2-a935-40be-a094-1a71d904b222\") " pod="openshift-marketplace/community-operators-wm4cz" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.605783 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49403f5b-925d-44e0-b168-5aeed908af4e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"49403f5b-925d-44e0-b168-5aeed908af4e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.605859 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eabb94a2-a935-40be-a094-1a71d904b222-catalog-content\") pod \"community-operators-wm4cz\" (UID: \"eabb94a2-a935-40be-a094-1a71d904b222\") " pod="openshift-marketplace/community-operators-wm4cz" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.605878 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eabb94a2-a935-40be-a094-1a71d904b222-utilities\") pod \"community-operators-wm4cz\" (UID: \"eabb94a2-a935-40be-a094-1a71d904b222\") " pod="openshift-marketplace/community-operators-wm4cz" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.606284 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eabb94a2-a935-40be-a094-1a71d904b222-utilities\") pod \"community-operators-wm4cz\" (UID: \"eabb94a2-a935-40be-a094-1a71d904b222\") " pod="openshift-marketplace/community-operators-wm4cz" Jan 27 15:52:11 crc kubenswrapper[4767]: E0127 15:52:11.606381 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:12.106366421 +0000 UTC m=+154.495383944 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.607878 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eabb94a2-a935-40be-a094-1a71d904b222-catalog-content\") pod \"community-operators-wm4cz\" (UID: \"eabb94a2-a935-40be-a094-1a71d904b222\") " pod="openshift-marketplace/community-operators-wm4cz" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.619379 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.621544 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9tjmr" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.642134 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtcw7\" (UniqueName: \"kubernetes.io/projected/eabb94a2-a935-40be-a094-1a71d904b222-kube-api-access-wtcw7\") pod \"community-operators-wm4cz\" (UID: \"eabb94a2-a935-40be-a094-1a71d904b222\") " pod="openshift-marketplace/community-operators-wm4cz" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.687335 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r7zcn"] Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.688699 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r7zcn" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.692322 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-4n6ch" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.699551 4767 patch_prober.go:28] interesting pod/router-default-5444994796-4n6ch container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:52:11 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld Jan 27 15:52:11 crc kubenswrapper[4767]: [+]process-running ok Jan 27 15:52:11 crc kubenswrapper[4767]: healthz check failed Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.699610 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4n6ch" podUID="333657b1-ebc6-4900-93eb-7762fd0eeaac" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.702267 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r7zcn"] Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.710288 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e3e0a9a-9b2b-4cf4-9f92-847e870be858-utilities\") pod \"certified-operators-r7zcn\" (UID: \"0e3e0a9a-9b2b-4cf4-9f92-847e870be858\") " pod="openshift-marketplace/certified-operators-r7zcn" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.710351 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhp7k\" (UniqueName: \"kubernetes.io/projected/0e3e0a9a-9b2b-4cf4-9f92-847e870be858-kube-api-access-lhp7k\") pod \"certified-operators-r7zcn\" (UID: \"0e3e0a9a-9b2b-4cf4-9f92-847e870be858\") " pod="openshift-marketplace/certified-operators-r7zcn" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.710416 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.712318 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49403f5b-925d-44e0-b168-5aeed908af4e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"49403f5b-925d-44e0-b168-5aeed908af4e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.712356 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e3e0a9a-9b2b-4cf4-9f92-847e870be858-catalog-content\") pod \"certified-operators-r7zcn\" (UID: \"0e3e0a9a-9b2b-4cf4-9f92-847e870be858\") " pod="openshift-marketplace/certified-operators-r7zcn" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.712404 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/49403f5b-925d-44e0-b168-5aeed908af4e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"49403f5b-925d-44e0-b168-5aeed908af4e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.713125 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49403f5b-925d-44e0-b168-5aeed908af4e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"49403f5b-925d-44e0-b168-5aeed908af4e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 15:52:11 crc kubenswrapper[4767]: E0127 15:52:11.713320 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:12.213306112 +0000 UTC m=+154.602323635 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.749793 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49403f5b-925d-44e0-b168-5aeed908af4e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"49403f5b-925d-44e0-b168-5aeed908af4e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.759297 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-cbltv" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.824290 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.824767 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e3e0a9a-9b2b-4cf4-9f92-847e870be858-utilities\") pod \"certified-operators-r7zcn\" (UID: \"0e3e0a9a-9b2b-4cf4-9f92-847e870be858\") " pod="openshift-marketplace/certified-operators-r7zcn" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.824807 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhp7k\" (UniqueName: \"kubernetes.io/projected/0e3e0a9a-9b2b-4cf4-9f92-847e870be858-kube-api-access-lhp7k\") pod \"certified-operators-r7zcn\" (UID: \"0e3e0a9a-9b2b-4cf4-9f92-847e870be858\") " pod="openshift-marketplace/certified-operators-r7zcn" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.824915 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e3e0a9a-9b2b-4cf4-9f92-847e870be858-catalog-content\") pod \"certified-operators-r7zcn\" (UID: \"0e3e0a9a-9b2b-4cf4-9f92-847e870be858\") " 
pod="openshift-marketplace/certified-operators-r7zcn" Jan 27 15:52:11 crc kubenswrapper[4767]: E0127 15:52:11.824988 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:12.324957571 +0000 UTC m=+154.713975124 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.825304 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e3e0a9a-9b2b-4cf4-9f92-847e870be858-catalog-content\") pod \"certified-operators-r7zcn\" (UID: \"0e3e0a9a-9b2b-4cf4-9f92-847e870be858\") " pod="openshift-marketplace/certified-operators-r7zcn" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.825358 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.825518 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e3e0a9a-9b2b-4cf4-9f92-847e870be858-utilities\") pod \"certified-operators-r7zcn\" (UID: \"0e3e0a9a-9b2b-4cf4-9f92-847e870be858\") " pod="openshift-marketplace/certified-operators-r7zcn" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.851134 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"fb0e1b4f-3732-43df-b2ee-d91066f7fb1b","Type":"ContainerStarted","Data":"be6dc48496f03a50604a9a3eff467c671e0ad30dae3924d3e5d4eb6ee5b6ea08"} Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.853668 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.853694 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.858418 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhp7k\" (UniqueName: \"kubernetes.io/projected/0e3e0a9a-9b2b-4cf4-9f92-847e870be858-kube-api-access-lhp7k\") pod \"certified-operators-r7zcn\" (UID: \"0e3e0a9a-9b2b-4cf4-9f92-847e870be858\") " pod="openshift-marketplace/certified-operators-r7zcn" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.860335 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wm4cz" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.873419 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.874951 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.883846 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v2v2x" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.904832 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vqjkg" Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.930999 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:11 crc kubenswrapper[4767]: E0127 15:52:11.934189 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:12.434175018 +0000 UTC m=+154.823192541 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:11 crc kubenswrapper[4767]: I0127 15:52:11.970838 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.970816091 podStartE2EDuration="2.970816091s" podCreationTimestamp="2026-01-27 15:52:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:11.937361381 +0000 UTC m=+154.326378904" watchObservedRunningTime="2026-01-27 15:52:11.970816091 +0000 UTC m=+154.359833614" Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.002633 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6v8jc"] Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.032475 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:12 crc kubenswrapper[4767]: E0127 15:52:12.033548 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:12.53353167 +0000 UTC m=+154.922549193 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.047631 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r7zcn" Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.134105 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:12 crc kubenswrapper[4767]: E0127 15:52:12.134456 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:12.634440957 +0000 UTC m=+155.023458470 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.236626 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:12 crc kubenswrapper[4767]: E0127 15:52:12.236837 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:12.736794815 +0000 UTC m=+155.125812338 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.237556 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:12 crc kubenswrapper[4767]: E0127 15:52:12.237906 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:12.737890097 +0000 UTC m=+155.126907620 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.340635 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:12 crc kubenswrapper[4767]: E0127 15:52:12.341009 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:12.840983957 +0000 UTC m=+155.230001480 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.449232 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:12 crc kubenswrapper[4767]: E0127 15:52:12.449682 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:12.949662529 +0000 UTC m=+155.338680102 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.555737 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:12 crc kubenswrapper[4767]: E0127 15:52:12.556012 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:13.055978173 +0000 UTC m=+155.444995696 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.556675 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:12 crc kubenswrapper[4767]: E0127 15:52:12.557142 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:13.057128816 +0000 UTC m=+155.446146339 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.564318 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wm4cz"] Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.581259 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.660650 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lbhhq"] Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.661037 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:12 crc kubenswrapper[4767]: E0127 15:52:12.661407 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:13.16139083 +0000 UTC m=+155.550408353 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.697440 4767 patch_prober.go:28] interesting pod/router-default-5444994796-4n6ch container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:52:12 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld Jan 27 15:52:12 crc kubenswrapper[4767]: [+]process-running ok Jan 27 15:52:12 crc kubenswrapper[4767]: healthz check failed Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.697511 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4n6ch" podUID="333657b1-ebc6-4900-93eb-7762fd0eeaac" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.764555 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:12 crc kubenswrapper[4767]: E0127 15:52:12.764853 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:13.26484107 +0000 UTC m=+155.653858593 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.807107 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r7zcn"] Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.867798 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:12 crc kubenswrapper[4767]: E0127 15:52:12.868151 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:13.368132056 +0000 UTC m=+155.757149589 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.883065 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lbhhq" event={"ID":"5f897714-8bcf-4ec4-8be0-86dfb0fc4785","Type":"ContainerStarted","Data":"2d7ad657e944ff882ae60befe785254dea482a6b19ae2658395b07c5f79bf371"} Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.888831 4767 generic.go:334] "Generic (PLEG): container finished" podID="b45a028d-9f8c-4090-985b-e7ddf929554c" containerID="ade3366287947227c09279d05b7302046c6a4fda81a219f2dd2db8940d5c893c" exitCode=0 Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.888894 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6v8jc" event={"ID":"b45a028d-9f8c-4090-985b-e7ddf929554c","Type":"ContainerDied","Data":"ade3366287947227c09279d05b7302046c6a4fda81a219f2dd2db8940d5c893c"} Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.888920 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6v8jc" event={"ID":"b45a028d-9f8c-4090-985b-e7ddf929554c","Type":"ContainerStarted","Data":"faf0e1f3c7c9b040b4709e3b739f7b82d8ef980792b1a6bfeef73bd6f101f689"} Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.897192 4767 generic.go:334] "Generic (PLEG): container finished" podID="fb0e1b4f-3732-43df-b2ee-d91066f7fb1b" containerID="be6dc48496f03a50604a9a3eff467c671e0ad30dae3924d3e5d4eb6ee5b6ea08" exitCode=0 Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.902359 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"fb0e1b4f-3732-43df-b2ee-d91066f7fb1b","Type":"ContainerDied","Data":"be6dc48496f03a50604a9a3eff467c671e0ad30dae3924d3e5d4eb6ee5b6ea08"} Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.902906 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.921383 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wm4cz" event={"ID":"eabb94a2-a935-40be-a094-1a71d904b222","Type":"ContainerStarted","Data":"3bc3250ad0e1f805e03d662a12a603e79165b9180662501c189df47212d1d88d"} Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.923777 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"49403f5b-925d-44e0-b168-5aeed908af4e","Type":"ContainerStarted","Data":"135d216f8e302fa51abd81af5b1405509d1d2b57fb6aa9f5ddf4fafb3ba45df4"} Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.941000 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq5r" Jan 27 15:52:12 crc kubenswrapper[4767]: I0127 15:52:12.969568 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:12 crc kubenswrapper[4767]: E0127 15:52:12.970815 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:13.470803384 +0000 UTC m=+155.859820907 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.000582 4767 patch_prober.go:28] interesting pod/apiserver-76f77b778f-d7nhv container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 27 15:52:13 crc kubenswrapper[4767]: [+]log ok Jan 27 15:52:13 crc kubenswrapper[4767]: [+]etcd ok Jan 27 15:52:13 crc kubenswrapper[4767]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 27 15:52:13 crc kubenswrapper[4767]: [+]poststarthook/generic-apiserver-start-informers ok Jan 27 15:52:13 crc kubenswrapper[4767]: [+]poststarthook/max-in-flight-filter ok Jan 27 15:52:13 crc kubenswrapper[4767]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 27 15:52:13 crc kubenswrapper[4767]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 27 15:52:13 crc kubenswrapper[4767]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 27 15:52:13 crc kubenswrapper[4767]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 27 15:52:13 crc kubenswrapper[4767]: [+]poststarthook/project.openshift.io-projectcache ok Jan 27 15:52:13 crc kubenswrapper[4767]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 27 15:52:13 crc kubenswrapper[4767]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Jan 27 15:52:13 crc kubenswrapper[4767]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 27 15:52:13 crc kubenswrapper[4767]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 27 15:52:13 crc kubenswrapper[4767]: livez check failed Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.000659 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-d7nhv" podUID="c2a37542-d13b-431e-a375-69e3fc2e90eb" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.071743 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:13 crc kubenswrapper[4767]: E0127 15:52:13.073081 4767 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:13.57306133 +0000 UTC m=+155.962078853 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.173562 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:13 crc kubenswrapper[4767]: E0127 15:52:13.174030 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:13.674010818 +0000 UTC m=+156.063028341 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.263876 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6pz42"] Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.264875 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6pz42" Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.266969 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.274385 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:13 crc kubenswrapper[4767]: E0127 15:52:13.274536 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:13.774512023 +0000 UTC m=+156.163529546 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.274678 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:13 crc kubenswrapper[4767]: E0127 15:52:13.275085 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:13.775069429 +0000 UTC m=+156.164086952 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.288576 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pz42"] Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.375583 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:13 crc kubenswrapper[4767]: E0127 15:52:13.375755 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:13.875715658 +0000 UTC m=+156.264733181 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.375886 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tmz2\" (UniqueName: \"kubernetes.io/projected/53c82776-5f8d-496e-a045-428e96b9f87c-kube-api-access-5tmz2\") pod \"redhat-marketplace-6pz42\" (UID: \"53c82776-5f8d-496e-a045-428e96b9f87c\") " pod="openshift-marketplace/redhat-marketplace-6pz42" Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.375914 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53c82776-5f8d-496e-a045-428e96b9f87c-utilities\") pod \"redhat-marketplace-6pz42\" (UID: \"53c82776-5f8d-496e-a045-428e96b9f87c\") " pod="openshift-marketplace/redhat-marketplace-6pz42" Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.375935 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53c82776-5f8d-496e-a045-428e96b9f87c-catalog-content\") pod \"redhat-marketplace-6pz42\" (UID: \"53c82776-5f8d-496e-a045-428e96b9f87c\") " pod="openshift-marketplace/redhat-marketplace-6pz42" Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.376060 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:13 crc kubenswrapper[4767]: E0127 15:52:13.376364 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:13.876354826 +0000 UTC m=+156.265372349 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.411431 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.477110 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:13 crc kubenswrapper[4767]: E0127 15:52:13.477332 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:13.977301894 +0000 UTC m=+156.366319417 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.477417 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53c82776-5f8d-496e-a045-428e96b9f87c-catalog-content\") pod \"redhat-marketplace-6pz42\" (UID: \"53c82776-5f8d-496e-a045-428e96b9f87c\") " pod="openshift-marketplace/redhat-marketplace-6pz42" Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.477506 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.477931 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tmz2\" (UniqueName: \"kubernetes.io/projected/53c82776-5f8d-496e-a045-428e96b9f87c-kube-api-access-5tmz2\") pod \"redhat-marketplace-6pz42\" (UID: \"53c82776-5f8d-496e-a045-428e96b9f87c\") " pod="openshift-marketplace/redhat-marketplace-6pz42" Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.477963 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53c82776-5f8d-496e-a045-428e96b9f87c-utilities\") pod \"redhat-marketplace-6pz42\" (UID: \"53c82776-5f8d-496e-a045-428e96b9f87c\") " pod="openshift-marketplace/redhat-marketplace-6pz42" Jan 27 15:52:13 crc 
kubenswrapper[4767]: E0127 15:52:13.478054 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:13.978035715 +0000 UTC m=+156.367053238 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.478392 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53c82776-5f8d-496e-a045-428e96b9f87c-utilities\") pod \"redhat-marketplace-6pz42\" (UID: \"53c82776-5f8d-496e-a045-428e96b9f87c\") " pod="openshift-marketplace/redhat-marketplace-6pz42" Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.478651 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53c82776-5f8d-496e-a045-428e96b9f87c-catalog-content\") pod \"redhat-marketplace-6pz42\" (UID: \"53c82776-5f8d-496e-a045-428e96b9f87c\") " pod="openshift-marketplace/redhat-marketplace-6pz42" Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.511941 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tmz2\" (UniqueName: \"kubernetes.io/projected/53c82776-5f8d-496e-a045-428e96b9f87c-kube-api-access-5tmz2\") pod \"redhat-marketplace-6pz42\" (UID: \"53c82776-5f8d-496e-a045-428e96b9f87c\") " pod="openshift-marketplace/redhat-marketplace-6pz42" Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.579049 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:13 crc kubenswrapper[4767]: E0127 15:52:13.579370 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:14.079325423 +0000 UTC m=+156.468342956 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.581821 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6pz42" Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.661677 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7pmbd"] Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.669897 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7pmbd" Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.676339 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7pmbd"] Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.692110 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:13 crc kubenswrapper[4767]: E0127 15:52:13.692392 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:14.192380062 +0000 UTC m=+156.581397585 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.696330 4767 patch_prober.go:28] interesting pod/router-default-5444994796-4n6ch container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 15:52:13 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld Jan 27 15:52:13 crc kubenswrapper[4767]: [+]process-running ok Jan 27 15:52:13 crc kubenswrapper[4767]: healthz check failed Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.696389 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4n6ch" podUID="333657b1-ebc6-4900-93eb-7762fd0eeaac" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.801136 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.801614 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43f8f2c5-51fc-4707-903f-fef9c5f133c5-utilities\") pod \"redhat-marketplace-7pmbd\" (UID: \"43f8f2c5-51fc-4707-903f-fef9c5f133c5\") " pod="openshift-marketplace/redhat-marketplace-7pmbd" Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 
15:52:13.801834 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43f8f2c5-51fc-4707-903f-fef9c5f133c5-catalog-content\") pod \"redhat-marketplace-7pmbd\" (UID: \"43f8f2c5-51fc-4707-903f-fef9c5f133c5\") " pod="openshift-marketplace/redhat-marketplace-7pmbd" Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.801879 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqznc\" (UniqueName: \"kubernetes.io/projected/43f8f2c5-51fc-4707-903f-fef9c5f133c5-kube-api-access-mqznc\") pod \"redhat-marketplace-7pmbd\" (UID: \"43f8f2c5-51fc-4707-903f-fef9c5f133c5\") " pod="openshift-marketplace/redhat-marketplace-7pmbd" Jan 27 15:52:13 crc kubenswrapper[4767]: E0127 15:52:13.802053 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:14.302019072 +0000 UTC m=+156.691036595 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.835997 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pz42"] Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.903325 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43f8f2c5-51fc-4707-903f-fef9c5f133c5-catalog-content\") pod \"redhat-marketplace-7pmbd\" (UID: \"43f8f2c5-51fc-4707-903f-fef9c5f133c5\") " pod="openshift-marketplace/redhat-marketplace-7pmbd" Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.903371 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqznc\" (UniqueName: \"kubernetes.io/projected/43f8f2c5-51fc-4707-903f-fef9c5f133c5-kube-api-access-mqznc\") pod \"redhat-marketplace-7pmbd\" (UID: \"43f8f2c5-51fc-4707-903f-fef9c5f133c5\") " pod="openshift-marketplace/redhat-marketplace-7pmbd" Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.903398 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43f8f2c5-51fc-4707-903f-fef9c5f133c5-utilities\") pod \"redhat-marketplace-7pmbd\" (UID: \"43f8f2c5-51fc-4707-903f-fef9c5f133c5\") " pod="openshift-marketplace/redhat-marketplace-7pmbd" Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.903443 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:13 crc kubenswrapper[4767]: E0127 15:52:13.903726 4767 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:14.403712562 +0000 UTC m=+156.792730085 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.903917 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43f8f2c5-51fc-4707-903f-fef9c5f133c5-catalog-content\") pod \"redhat-marketplace-7pmbd\" (UID: \"43f8f2c5-51fc-4707-903f-fef9c5f133c5\") " pod="openshift-marketplace/redhat-marketplace-7pmbd" Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.903988 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43f8f2c5-51fc-4707-903f-fef9c5f133c5-utilities\") pod \"redhat-marketplace-7pmbd\" (UID: \"43f8f2c5-51fc-4707-903f-fef9c5f133c5\") " pod="openshift-marketplace/redhat-marketplace-7pmbd" Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.924999 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqznc\" (UniqueName: \"kubernetes.io/projected/43f8f2c5-51fc-4707-903f-fef9c5f133c5-kube-api-access-mqznc\") pod \"redhat-marketplace-7pmbd\" (UID: \"43f8f2c5-51fc-4707-903f-fef9c5f133c5\") " pod="openshift-marketplace/redhat-marketplace-7pmbd" Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.931618 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pz42" event={"ID":"53c82776-5f8d-496e-a045-428e96b9f87c","Type":"ContainerStarted","Data":"c6212406b850756b2b6613f66dd05f5e4a4b5de51d2b71b5e0124b3288e999c8"} Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.933481 4767 generic.go:334] "Generic (PLEG): container finished" podID="5f897714-8bcf-4ec4-8be0-86dfb0fc4785" containerID="88685c0b64213866a9b4483f8318390e22fecf2c8e0d06854509e6f8e56d3a1f" exitCode=0 Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.933540 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lbhhq" event={"ID":"5f897714-8bcf-4ec4-8be0-86dfb0fc4785","Type":"ContainerDied","Data":"88685c0b64213866a9b4483f8318390e22fecf2c8e0d06854509e6f8e56d3a1f"} Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.938550 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5pb8t" event={"ID":"6283b57b-899c-4d3d-b1a4-531a683d3853","Type":"ContainerStarted","Data":"78432f32ca0cd61527f3928ee22a7b814a5c67d63824955e15b068b5825c60c6"} Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.940319 4767 generic.go:334] "Generic (PLEG): container finished" podID="0e3e0a9a-9b2b-4cf4-9f92-847e870be858" containerID="8bc4395e018f0d60079f9755ba136cdeeb3f3b775080e2c213eae42b54632525" exitCode=0 Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.940403 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7zcn" 
event={"ID":"0e3e0a9a-9b2b-4cf4-9f92-847e870be858","Type":"ContainerDied","Data":"8bc4395e018f0d60079f9755ba136cdeeb3f3b775080e2c213eae42b54632525"} Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.940428 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7zcn" event={"ID":"0e3e0a9a-9b2b-4cf4-9f92-847e870be858","Type":"ContainerStarted","Data":"8a2af7d588d12012fb09138392caf0db63080be1ecd4b252324dc843e553d0ff"} Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.942621 4767 generic.go:334] "Generic (PLEG): container finished" podID="eabb94a2-a935-40be-a094-1a71d904b222" containerID="0d9d05b61dd42b5b0a979f260efc5b9b7728ebf6e39ad4726422953386c24d6e" exitCode=0 Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.942698 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wm4cz" event={"ID":"eabb94a2-a935-40be-a094-1a71d904b222","Type":"ContainerDied","Data":"0d9d05b61dd42b5b0a979f260efc5b9b7728ebf6e39ad4726422953386c24d6e"} Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.945279 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"49403f5b-925d-44e0-b168-5aeed908af4e","Type":"ContainerStarted","Data":"8c07ab0086f6e4e3c3b30666b9aed8293855499230ff19a109fb9c19e3add5c8"} Jan 27 15:52:13 crc kubenswrapper[4767]: I0127 15:52:13.975535 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.975502614 podStartE2EDuration="2.975502614s" podCreationTimestamp="2026-01-27 15:52:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:13.97330026 +0000 UTC m=+156.362317783" watchObservedRunningTime="2026-01-27 15:52:13.975502614 +0000 UTC m=+156.364520137" Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.004519 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:14 crc kubenswrapper[4767]: E0127 15:52:14.004697 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:14.50467254 +0000 UTC m=+156.893690063 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.004773 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:14 crc kubenswrapper[4767]: E0127 15:52:14.005193 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:14.505176274 +0000 UTC m=+156.894193797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.024422 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7pmbd" Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.106418 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:14 crc kubenswrapper[4767]: E0127 15:52:14.106627 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:14.606594736 +0000 UTC m=+156.995612269 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.108126 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:14 crc kubenswrapper[4767]: E0127 15:52:14.108649 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:14.608633365 +0000 UTC m=+156.997650888 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.167055 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.209394 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:14 crc kubenswrapper[4767]: E0127 15:52:14.209691 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:14.709648905 +0000 UTC m=+157.098666428 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.261695 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bnmj9"] Jan 27 15:52:14 crc kubenswrapper[4767]: E0127 15:52:14.261947 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb0e1b4f-3732-43df-b2ee-d91066f7fb1b" containerName="pruner" Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.261960 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb0e1b4f-3732-43df-b2ee-d91066f7fb1b" containerName="pruner" Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.262085 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb0e1b4f-3732-43df-b2ee-d91066f7fb1b" containerName="pruner" Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.263022 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bnmj9" Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.267559 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.290216 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bnmj9"] Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.310717 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb0e1b4f-3732-43df-b2ee-d91066f7fb1b-kubelet-dir\") pod \"fb0e1b4f-3732-43df-b2ee-d91066f7fb1b\" (UID: \"fb0e1b4f-3732-43df-b2ee-d91066f7fb1b\") " Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.310766 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb0e1b4f-3732-43df-b2ee-d91066f7fb1b-kube-api-access\") pod \"fb0e1b4f-3732-43df-b2ee-d91066f7fb1b\" (UID: \"fb0e1b4f-3732-43df-b2ee-d91066f7fb1b\") " Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.311156 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" Jan 27 15:52:14 crc kubenswrapper[4767]: E0127 15:52:14.311466 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:14.811451787 +0000 UTC m=+157.200469310 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.312397 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb0e1b4f-3732-43df-b2ee-d91066f7fb1b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fb0e1b4f-3732-43df-b2ee-d91066f7fb1b" (UID: "fb0e1b4f-3732-43df-b2ee-d91066f7fb1b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.320047 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb0e1b4f-3732-43df-b2ee-d91066f7fb1b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fb0e1b4f-3732-43df-b2ee-d91066f7fb1b" (UID: "fb0e1b4f-3732-43df-b2ee-d91066f7fb1b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.362089 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7pmbd"] Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.412774 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 15:52:14 crc kubenswrapper[4767]: E0127 15:52:14.412911 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 15:52:14.912892469 +0000 UTC m=+157.301909992 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.413125 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldlg5\" (UniqueName: \"kubernetes.io/projected/69b7edc7-f8c2-4e0e-923c-b5a3395ae14d-kube-api-access-ldlg5\") pod \"redhat-operators-bnmj9\" (UID: \"69b7edc7-f8c2-4e0e-923c-b5a3395ae14d\") " pod="openshift-marketplace/redhat-operators-bnmj9"
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.413253 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b7edc7-f8c2-4e0e-923c-b5a3395ae14d-catalog-content\") pod \"redhat-operators-bnmj9\" (UID: \"69b7edc7-f8c2-4e0e-923c-b5a3395ae14d\") " pod="openshift-marketplace/redhat-operators-bnmj9"
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.413291 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp"
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.413320 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b7edc7-f8c2-4e0e-923c-b5a3395ae14d-utilities\") pod \"redhat-operators-bnmj9\" (UID: \"69b7edc7-f8c2-4e0e-923c-b5a3395ae14d\") " pod="openshift-marketplace/redhat-operators-bnmj9"
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.413371 4767 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb0e1b4f-3732-43df-b2ee-d91066f7fb1b-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.413381 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb0e1b4f-3732-43df-b2ee-d91066f7fb1b-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 27 15:52:14 crc kubenswrapper[4767]: E0127 15:52:14.413856 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 15:52:14.913827027 +0000 UTC m=+157.302844730 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f4kgp" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.427381 4767 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 27 15:52:14 crc kubenswrapper[4767]: W0127 15:52:14.435034 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43f8f2c5_51fc_4707_903f_fef9c5f133c5.slice/crio-09b45ca304ced8abef2827abfa263e2db0e60eb45232396068773db79ac118dd WatchSource:0}: Error finding container 09b45ca304ced8abef2827abfa263e2db0e60eb45232396068773db79ac118dd: Status 404 returned error can't find the container with id 09b45ca304ced8abef2827abfa263e2db0e60eb45232396068773db79ac118dd
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.477021 4767 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-27T15:52:14.427411221Z","Handler":null,"Name":""}
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.482054 4767 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.482093 4767 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.514438 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.515096 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b7edc7-f8c2-4e0e-923c-b5a3395ae14d-catalog-content\") pod \"redhat-operators-bnmj9\" (UID: \"69b7edc7-f8c2-4e0e-923c-b5a3395ae14d\") " pod="openshift-marketplace/redhat-operators-bnmj9"
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.515173 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b7edc7-f8c2-4e0e-923c-b5a3395ae14d-utilities\") pod \"redhat-operators-bnmj9\" (UID: \"69b7edc7-f8c2-4e0e-923c-b5a3395ae14d\") " pod="openshift-marketplace/redhat-operators-bnmj9"
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.515260 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldlg5\" (UniqueName: \"kubernetes.io/projected/69b7edc7-f8c2-4e0e-923c-b5a3395ae14d-kube-api-access-ldlg5\") pod \"redhat-operators-bnmj9\" (UID: \"69b7edc7-f8c2-4e0e-923c-b5a3395ae14d\") " pod="openshift-marketplace/redhat-operators-bnmj9"
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.515605 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b7edc7-f8c2-4e0e-923c-b5a3395ae14d-catalog-content\") pod \"redhat-operators-bnmj9\" (UID: \"69b7edc7-f8c2-4e0e-923c-b5a3395ae14d\") " pod="openshift-marketplace/redhat-operators-bnmj9"
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.515731 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b7edc7-f8c2-4e0e-923c-b5a3395ae14d-utilities\") pod \"redhat-operators-bnmj9\" (UID: \"69b7edc7-f8c2-4e0e-923c-b5a3395ae14d\") " pod="openshift-marketplace/redhat-operators-bnmj9"
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.518865 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.535240 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldlg5\" (UniqueName: \"kubernetes.io/projected/69b7edc7-f8c2-4e0e-923c-b5a3395ae14d-kube-api-access-ldlg5\") pod \"redhat-operators-bnmj9\" (UID: \"69b7edc7-f8c2-4e0e-923c-b5a3395ae14d\") " pod="openshift-marketplace/redhat-operators-bnmj9"
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.594390 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bnmj9"
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.619510 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp"
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.623719 4767 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.623783 4767 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp"
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.646757 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f4kgp\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") " pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp"
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.653863 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7nshp"]
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.654906 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7nshp"
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.667439 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7nshp"]
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.693006 4767 patch_prober.go:28] interesting pod/router-default-5444994796-4n6ch container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 15:52:14 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld
Jan 27 15:52:14 crc kubenswrapper[4767]: [+]process-running ok
Jan 27 15:52:14 crc kubenswrapper[4767]: healthz check failed
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.693073 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4n6ch" podUID="333657b1-ebc6-4900-93eb-7762fd0eeaac" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.813782 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bnmj9"]
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.822649 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84510a56-8f29-404f-b5eb-c7433db1de6b-utilities\") pod \"redhat-operators-7nshp\" (UID: \"84510a56-8f29-404f-b5eb-c7433db1de6b\") " pod="openshift-marketplace/redhat-operators-7nshp"
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.822731 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2nbg\" (UniqueName: \"kubernetes.io/projected/84510a56-8f29-404f-b5eb-c7433db1de6b-kube-api-access-b2nbg\") pod \"redhat-operators-7nshp\" (UID: \"84510a56-8f29-404f-b5eb-c7433db1de6b\") " pod="openshift-marketplace/redhat-operators-7nshp"
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.822773 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84510a56-8f29-404f-b5eb-c7433db1de6b-catalog-content\") pod \"redhat-operators-7nshp\" (UID: \"84510a56-8f29-404f-b5eb-c7433db1de6b\") " pod="openshift-marketplace/redhat-operators-7nshp"
Jan 27 15:52:14 crc kubenswrapper[4767]: W0127 15:52:14.823428 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69b7edc7_f8c2_4e0e_923c_b5a3395ae14d.slice/crio-0e2e45e64e8e2596ccc78c7ca6d94a21d6f194da9c3c2b01814203763881fef8 WatchSource:0}: Error finding container 0e2e45e64e8e2596ccc78c7ca6d94a21d6f194da9c3c2b01814203763881fef8: Status 404 returned error can't find the container with id 0e2e45e64e8e2596ccc78c7ca6d94a21d6f194da9c3c2b01814203763881fef8
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.901677 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp"
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.924504 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2nbg\" (UniqueName: \"kubernetes.io/projected/84510a56-8f29-404f-b5eb-c7433db1de6b-kube-api-access-b2nbg\") pod \"redhat-operators-7nshp\" (UID: \"84510a56-8f29-404f-b5eb-c7433db1de6b\") " pod="openshift-marketplace/redhat-operators-7nshp"
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.924557 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84510a56-8f29-404f-b5eb-c7433db1de6b-catalog-content\") pod \"redhat-operators-7nshp\" (UID: \"84510a56-8f29-404f-b5eb-c7433db1de6b\") " pod="openshift-marketplace/redhat-operators-7nshp"
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.924650 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84510a56-8f29-404f-b5eb-c7433db1de6b-utilities\") pod \"redhat-operators-7nshp\" (UID: \"84510a56-8f29-404f-b5eb-c7433db1de6b\") " pod="openshift-marketplace/redhat-operators-7nshp"
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.925151 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84510a56-8f29-404f-b5eb-c7433db1de6b-utilities\") pod \"redhat-operators-7nshp\" (UID: \"84510a56-8f29-404f-b5eb-c7433db1de6b\") " pod="openshift-marketplace/redhat-operators-7nshp"
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.925458 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84510a56-8f29-404f-b5eb-c7433db1de6b-catalog-content\") pod \"redhat-operators-7nshp\" (UID: \"84510a56-8f29-404f-b5eb-c7433db1de6b\") " pod="openshift-marketplace/redhat-operators-7nshp"
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.943490 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2nbg\" (UniqueName: \"kubernetes.io/projected/84510a56-8f29-404f-b5eb-c7433db1de6b-kube-api-access-b2nbg\") pod \"redhat-operators-7nshp\" (UID: \"84510a56-8f29-404f-b5eb-c7433db1de6b\") " pod="openshift-marketplace/redhat-operators-7nshp"
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.957810 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"fb0e1b4f-3732-43df-b2ee-d91066f7fb1b","Type":"ContainerDied","Data":"e04b4d360c0f002bd0e7e78b15dbefe3bbb6ed0fc625794e05378704fe666596"}
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.957857 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e04b4d360c0f002bd0e7e78b15dbefe3bbb6ed0fc625794e05378704fe666596"
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.957831 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.963001 4767 generic.go:334] "Generic (PLEG): container finished" podID="49403f5b-925d-44e0-b168-5aeed908af4e" containerID="8c07ab0086f6e4e3c3b30666b9aed8293855499230ff19a109fb9c19e3add5c8" exitCode=0
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.963082 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"49403f5b-925d-44e0-b168-5aeed908af4e","Type":"ContainerDied","Data":"8c07ab0086f6e4e3c3b30666b9aed8293855499230ff19a109fb9c19e3add5c8"}
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.966570 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7pmbd" event={"ID":"43f8f2c5-51fc-4707-903f-fef9c5f133c5","Type":"ContainerStarted","Data":"09b45ca304ced8abef2827abfa263e2db0e60eb45232396068773db79ac118dd"}
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.968023 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7nshp"
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.971602 4767 generic.go:334] "Generic (PLEG): container finished" podID="53c82776-5f8d-496e-a045-428e96b9f87c" containerID="09957739a34903519ce6316c22131659e4fdd1c1ec7b5af225191d7906e766a4" exitCode=0
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.971682 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pz42" event={"ID":"53c82776-5f8d-496e-a045-428e96b9f87c","Type":"ContainerDied","Data":"09957739a34903519ce6316c22131659e4fdd1c1ec7b5af225191d7906e766a4"}
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.976291 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5pb8t" event={"ID":"6283b57b-899c-4d3d-b1a4-531a683d3853","Type":"ContainerStarted","Data":"9712051392db39fac33e87fa9d2ce230fde082b5cfd372eb43614be0a55ed365"}
Jan 27 15:52:14 crc kubenswrapper[4767]: I0127 15:52:14.978084 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bnmj9" event={"ID":"69b7edc7-f8c2-4e0e-923c-b5a3395ae14d","Type":"ContainerStarted","Data":"0e2e45e64e8e2596ccc78c7ca6d94a21d6f194da9c3c2b01814203763881fef8"}
Jan 27 15:52:15 crc kubenswrapper[4767]: I0127 15:52:15.173016 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7nshp"]
Jan 27 15:52:15 crc kubenswrapper[4767]: W0127 15:52:15.183119 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84510a56_8f29_404f_b5eb_c7433db1de6b.slice/crio-704c876a572f6a84bda36cb1dd8099990bbbe8793f99d67771c1d18033ed6126 WatchSource:0}: Error finding container 704c876a572f6a84bda36cb1dd8099990bbbe8793f99d67771c1d18033ed6126: Status 404 returned error can't find the container with id 704c876a572f6a84bda36cb1dd8099990bbbe8793f99d67771c1d18033ed6126
Jan 27 15:52:15 crc kubenswrapper[4767]: I0127 15:52:15.298377 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-f4kgp"]
Jan 27 15:52:15 crc kubenswrapper[4767]: W0127 15:52:15.310334 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c067093_6c7c_47fb_bcc6_d50bba65fe78.slice/crio-f50a8c385aa358ca0b45e567c3e3cdf04ade8f8a11e8d2dcb072bf4f778d2cbb WatchSource:0}: Error finding container f50a8c385aa358ca0b45e567c3e3cdf04ade8f8a11e8d2dcb072bf4f778d2cbb: Status 404 returned error can't find the container with id f50a8c385aa358ca0b45e567c3e3cdf04ade8f8a11e8d2dcb072bf4f778d2cbb
Jan 27 15:52:15 crc kubenswrapper[4767]: I0127 15:52:15.692545 4767 patch_prober.go:28] interesting pod/router-default-5444994796-4n6ch container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 15:52:15 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld
Jan 27 15:52:15 crc kubenswrapper[4767]: [+]process-running ok
Jan 27 15:52:15 crc kubenswrapper[4767]: healthz check failed
Jan 27 15:52:15 crc kubenswrapper[4767]: I0127 15:52:15.692790 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4n6ch" podUID="333657b1-ebc6-4900-93eb-7762fd0eeaac" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 15:52:15 crc kubenswrapper[4767]: I0127 15:52:15.985450 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5pb8t" event={"ID":"6283b57b-899c-4d3d-b1a4-531a683d3853","Type":"ContainerStarted","Data":"5fc43f4160a3afb5ccf3d3da71c490b2cdbc69ab5679a7a109513a8f8164315f"}
Jan 27 15:52:15 crc kubenswrapper[4767]: I0127 15:52:15.990685 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nshp" event={"ID":"84510a56-8f29-404f-b5eb-c7433db1de6b","Type":"ContainerDied","Data":"daf80ab02fc430eaacbce8691909fef3645d103333d20e15997a8b3d820eab10"}
Jan 27 15:52:15 crc kubenswrapper[4767]: I0127 15:52:15.990705 4767 generic.go:334] "Generic (PLEG): container finished" podID="84510a56-8f29-404f-b5eb-c7433db1de6b" containerID="daf80ab02fc430eaacbce8691909fef3645d103333d20e15997a8b3d820eab10" exitCode=0
Jan 27 15:52:15 crc kubenswrapper[4767]: I0127 15:52:15.990774 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nshp" event={"ID":"84510a56-8f29-404f-b5eb-c7433db1de6b","Type":"ContainerStarted","Data":"704c876a572f6a84bda36cb1dd8099990bbbe8793f99d67771c1d18033ed6126"}
Jan 27 15:52:15 crc kubenswrapper[4767]: I0127 15:52:15.993866 4767 generic.go:334] "Generic (PLEG): container finished" podID="69b7edc7-f8c2-4e0e-923c-b5a3395ae14d" containerID="7bc09270d8c041f980a2b64aca174d9a90c24b81755a8645b02d9574ef95a129" exitCode=0
Jan 27 15:52:15 crc kubenswrapper[4767]: I0127 15:52:15.993965 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bnmj9" event={"ID":"69b7edc7-f8c2-4e0e-923c-b5a3395ae14d","Type":"ContainerDied","Data":"7bc09270d8c041f980a2b64aca174d9a90c24b81755a8645b02d9574ef95a129"}
Jan 27 15:52:15 crc kubenswrapper[4767]: I0127 15:52:15.996768 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" event={"ID":"5c067093-6c7c-47fb-bcc6-d50bba65fe78","Type":"ContainerStarted","Data":"6ec0446f5147e7cc7d185e377aeb81dd52e270baffc5ec9045c2791afd3637f1"}
Jan 27 15:52:15 crc kubenswrapper[4767]: I0127 15:52:15.996797 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" event={"ID":"5c067093-6c7c-47fb-bcc6-d50bba65fe78","Type":"ContainerStarted","Data":"f50a8c385aa358ca0b45e567c3e3cdf04ade8f8a11e8d2dcb072bf4f778d2cbb"}
Jan 27 15:52:15 crc kubenswrapper[4767]: I0127 15:52:15.999669 4767 generic.go:334] "Generic (PLEG): container finished" podID="43f8f2c5-51fc-4707-903f-fef9c5f133c5" containerID="362c9fc5434f75c0783042b9eda566b1a903d2bdc4234843a5b831b6596773cb" exitCode=0
Jan 27 15:52:16 crc kubenswrapper[4767]: I0127 15:52:16.000271 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7pmbd" event={"ID":"43f8f2c5-51fc-4707-903f-fef9c5f133c5","Type":"ContainerDied","Data":"362c9fc5434f75c0783042b9eda566b1a903d2bdc4234843a5b831b6596773cb"}
Jan 27 15:52:16 crc kubenswrapper[4767]: I0127 15:52:16.287962 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-d7nhv"
Jan 27 15:52:16 crc kubenswrapper[4767]: I0127 15:52:16.294310 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-d7nhv"
Jan 27 15:52:16 crc kubenswrapper[4767]: I0127 15:52:16.307395 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 27 15:52:16 crc kubenswrapper[4767]: I0127 15:52:16.365891 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 27 15:52:16 crc kubenswrapper[4767]: I0127 15:52:16.448377 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49403f5b-925d-44e0-b168-5aeed908af4e-kube-api-access\") pod \"49403f5b-925d-44e0-b168-5aeed908af4e\" (UID: \"49403f5b-925d-44e0-b168-5aeed908af4e\") "
Jan 27 15:52:16 crc kubenswrapper[4767]: I0127 15:52:16.448464 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49403f5b-925d-44e0-b168-5aeed908af4e-kubelet-dir\") pod \"49403f5b-925d-44e0-b168-5aeed908af4e\" (UID: \"49403f5b-925d-44e0-b168-5aeed908af4e\") "
Jan 27 15:52:16 crc kubenswrapper[4767]: I0127 15:52:16.450688 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49403f5b-925d-44e0-b168-5aeed908af4e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "49403f5b-925d-44e0-b168-5aeed908af4e" (UID: "49403f5b-925d-44e0-b168-5aeed908af4e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 15:52:16 crc kubenswrapper[4767]: I0127 15:52:16.457562 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49403f5b-925d-44e0-b168-5aeed908af4e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "49403f5b-925d-44e0-b168-5aeed908af4e" (UID: "49403f5b-925d-44e0-b168-5aeed908af4e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:52:16 crc kubenswrapper[4767]: I0127 15:52:16.550428 4767 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49403f5b-925d-44e0-b168-5aeed908af4e-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 27 15:52:16 crc kubenswrapper[4767]: I0127 15:52:16.550470 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49403f5b-925d-44e0-b168-5aeed908af4e-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 27 15:52:16 crc kubenswrapper[4767]: I0127 15:52:16.694340 4767 patch_prober.go:28] interesting pod/router-default-5444994796-4n6ch container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 15:52:16 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld
Jan 27 15:52:16 crc kubenswrapper[4767]: [+]process-running ok
Jan 27 15:52:16 crc kubenswrapper[4767]: healthz check failed
Jan 27 15:52:16 crc kubenswrapper[4767]: I0127 15:52:16.694410 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4n6ch" podUID="333657b1-ebc6-4900-93eb-7762fd0eeaac" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 15:52:16 crc kubenswrapper[4767]: I0127 15:52:16.898334 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-px962"
Jan 27 15:52:17 crc kubenswrapper[4767]: I0127 15:52:17.018929 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"49403f5b-925d-44e0-b168-5aeed908af4e","Type":"ContainerDied","Data":"135d216f8e302fa51abd81af5b1405509d1d2b57fb6aa9f5ddf4fafb3ba45df4"}
Jan 27 15:52:17 crc kubenswrapper[4767]: I0127 15:52:17.019180 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="135d216f8e302fa51abd81af5b1405509d1d2b57fb6aa9f5ddf4fafb3ba45df4"
Jan 27 15:52:17 crc kubenswrapper[4767]: I0127 15:52:17.019120 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 27 15:52:17 crc kubenswrapper[4767]: I0127 15:52:17.047459 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-5pb8t" podStartSLOduration=19.047423439 podStartE2EDuration="19.047423439s" podCreationTimestamp="2026-01-27 15:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:17.03954151 +0000 UTC m=+159.428559033" watchObservedRunningTime="2026-01-27 15:52:17.047423439 +0000 UTC m=+159.436440962"
Jan 27 15:52:17 crc kubenswrapper[4767]: I0127 15:52:17.069348 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" podStartSLOduration=139.069326674 podStartE2EDuration="2m19.069326674s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:52:17.065572685 +0000 UTC m=+159.454590208" watchObservedRunningTime="2026-01-27 15:52:17.069326674 +0000 UTC m=+159.458344207"
Jan 27 15:52:17 crc kubenswrapper[4767]: I0127 15:52:17.692815 4767 patch_prober.go:28] interesting pod/router-default-5444994796-4n6ch container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 15:52:17 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld
Jan 27 15:52:17 crc kubenswrapper[4767]: [+]process-running ok
Jan 27 15:52:17 crc kubenswrapper[4767]: healthz check failed
Jan 27 15:52:17 crc kubenswrapper[4767]: I0127 15:52:17.693069 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4n6ch" podUID="333657b1-ebc6-4900-93eb-7762fd0eeaac" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 15:52:18 crc kubenswrapper[4767]: I0127 15:52:18.692541 4767 patch_prober.go:28] interesting pod/router-default-5444994796-4n6ch container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 15:52:18 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld
Jan 27 15:52:18 crc kubenswrapper[4767]: [+]process-running ok
Jan 27 15:52:18 crc kubenswrapper[4767]: healthz check failed
Jan 27 15:52:18 crc kubenswrapper[4767]: I0127 15:52:18.692656 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4n6ch" podUID="333657b1-ebc6-4900-93eb-7762fd0eeaac" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 15:52:19 crc kubenswrapper[4767]: I0127 15:52:19.692251 4767 patch_prober.go:28] interesting pod/router-default-5444994796-4n6ch container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 15:52:19 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld
Jan 27 15:52:19 crc kubenswrapper[4767]: [+]process-running ok
Jan 27 15:52:19 crc kubenswrapper[4767]: healthz check failed
Jan 27 15:52:19 crc kubenswrapper[4767]: I0127 15:52:19.692638 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4n6ch" podUID="333657b1-ebc6-4900-93eb-7762fd0eeaac" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 15:52:20 crc kubenswrapper[4767]: I0127 15:52:20.692592 4767 patch_prober.go:28] interesting pod/router-default-5444994796-4n6ch container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 15:52:20 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld
Jan 27 15:52:20 crc kubenswrapper[4767]: [+]process-running ok
Jan 27 15:52:20 crc kubenswrapper[4767]: healthz check failed
Jan 27 15:52:20 crc kubenswrapper[4767]: I0127 15:52:20.692666 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4n6ch" podUID="333657b1-ebc6-4900-93eb-7762fd0eeaac" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 15:52:21 crc kubenswrapper[4767]: I0127 15:52:21.312586 4767 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksqxd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Jan 27 15:52:21 crc kubenswrapper[4767]: I0127 15:52:21.312630 4767 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksqxd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Jan 27 15:52:21 crc kubenswrapper[4767]: I0127 15:52:21.312652 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ksqxd" podUID="25e39933-042b-46a8-9e96-19acb0944e08" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Jan 27 15:52:21 crc kubenswrapper[4767]: I0127 15:52:21.312715 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ksqxd" podUID="25e39933-042b-46a8-9e96-19acb0944e08" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Jan 27 15:52:21 crc kubenswrapper[4767]: I0127 15:52:21.375985 4767 patch_prober.go:28] interesting pod/console-f9d7485db-vxkdk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Jan 27 15:52:21 crc kubenswrapper[4767]: I0127 15:52:21.376419 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-vxkdk" podUID="90596a9c-3db0-47e4-a002-a97cd73f2ab9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused"
Jan 27 15:52:21 crc kubenswrapper[4767]: I0127 15:52:21.671832 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03660290-055d-4f50-be45-3d6d9c023b34-metrics-certs\") pod \"network-metrics-daemon-r296r\" (UID: \"03660290-055d-4f50-be45-3d6d9c023b34\") " pod="openshift-multus/network-metrics-daemon-r296r"
Jan 27 15:52:21 crc kubenswrapper[4767]: I0127 15:52:21.680039 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03660290-055d-4f50-be45-3d6d9c023b34-metrics-certs\") pod \"network-metrics-daemon-r296r\" (UID: \"03660290-055d-4f50-be45-3d6d9c023b34\") " pod="openshift-multus/network-metrics-daemon-r296r"
Jan 27 15:52:21 crc kubenswrapper[4767]: I0127 15:52:21.682527 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r296r"
Jan 27 15:52:21 crc kubenswrapper[4767]: I0127 15:52:21.692899 4767 patch_prober.go:28] interesting pod/router-default-5444994796-4n6ch container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 15:52:21 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld
Jan 27 15:52:21 crc kubenswrapper[4767]: [+]process-running ok
Jan 27 15:52:21 crc kubenswrapper[4767]: healthz check failed
Jan 27 15:52:21 crc kubenswrapper[4767]: I0127 15:52:21.692993 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4n6ch" podUID="333657b1-ebc6-4900-93eb-7762fd0eeaac" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 15:52:22 crc kubenswrapper[4767]: I0127 15:52:22.691956 4767 patch_prober.go:28] interesting pod/router-default-5444994796-4n6ch container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 15:52:22 crc kubenswrapper[4767]: [-]has-synced failed: reason withheld
Jan 27 15:52:22 crc kubenswrapper[4767]: [+]process-running ok
Jan 27 15:52:22 crc kubenswrapper[4767]: healthz check failed
Jan 27 15:52:22 crc kubenswrapper[4767]: I0127 15:52:22.692029 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4n6ch" podUID="333657b1-ebc6-4900-93eb-7762fd0eeaac" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 15:52:23 crc kubenswrapper[4767]: I0127 15:52:23.693809 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-4n6ch"
Jan 27 15:52:23 crc kubenswrapper[4767]: I0127 15:52:23.697194 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-4n6ch"
Jan 27 15:52:24 crc kubenswrapper[4767]: I0127 15:52:24.858181 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 15:52:24 crc kubenswrapper[4767]: I0127 15:52:24.858626 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 15:52:24 crc kubenswrapper[4767]: I0127 15:52:24.902149 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp"
Jan 27 15:52:28 crc kubenswrapper[4767]: I0127 15:52:28.861264 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-r296r"]
Jan 27 15:52:30 crc kubenswrapper[4767]: I0127 15:52:30.113273 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7m254"]
Jan 27 15:52:30 crc kubenswrapper[4767]: I0127 15:52:30.113906 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" podUID="9d9edf4c-6df3-484c-9bb7-a344d8147aa6" containerName="controller-manager" containerID="cri-o://721bdd3be33645608399716428d5c6efade9c188c0d547618c1141db7d4a606e" gracePeriod=30
Jan 27 15:52:30 crc kubenswrapper[4767]: I0127 15:52:30.118017 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2"]
Jan 27 15:52:30 crc kubenswrapper[4767]: I0127 15:52:30.118505 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" podUID="34c3a00d-6b69-4790-ba95-29ae01dd296f" containerName="route-controller-manager" containerID="cri-o://26198e480ae52e3c31055d523eee5ce991004cd80a99480be6c5e5b9fd089f55" gracePeriod=30
Jan 27 15:52:31 crc kubenswrapper[4767]: I0127 15:52:31.174880 4767 generic.go:334] "Generic (PLEG): container finished" podID="34c3a00d-6b69-4790-ba95-29ae01dd296f" containerID="26198e480ae52e3c31055d523eee5ce991004cd80a99480be6c5e5b9fd089f55" exitCode=0
Jan 27 15:52:31 crc kubenswrapper[4767]: I0127 15:52:31.175316 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" event={"ID":"34c3a00d-6b69-4790-ba95-29ae01dd296f","Type":"ContainerDied","Data":"26198e480ae52e3c31055d523eee5ce991004cd80a99480be6c5e5b9fd089f55"}
Jan 27 15:52:31 crc kubenswrapper[4767]: I0127 15:52:31.177511 4767 generic.go:334] "Generic (PLEG): container finished" podID="9d9edf4c-6df3-484c-9bb7-a344d8147aa6" containerID="721bdd3be33645608399716428d5c6efade9c188c0d547618c1141db7d4a606e" exitCode=0
Jan 27 15:52:31 crc kubenswrapper[4767]: I0127 15:52:31.177555 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" event={"ID":"9d9edf4c-6df3-484c-9bb7-a344d8147aa6","Type":"ContainerDied","Data":"721bdd3be33645608399716428d5c6efade9c188c0d547618c1141db7d4a606e"}
Jan 27 15:52:31 crc kubenswrapper[4767]: I0127 15:52:31.312509 4767 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksqxd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Jan 27 15:52:31 crc kubenswrapper[4767]: I0127 15:52:31.312552 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ksqxd" podUID="25e39933-042b-46a8-9e96-19acb0944e08" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Jan 27 15:52:31 crc kubenswrapper[4767]: I0127 15:52:31.312654 4767 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksqxd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Jan 27 15:52:31 crc kubenswrapper[4767]: I0127 15:52:31.312742 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ksqxd" podUID="25e39933-042b-46a8-9e96-19acb0944e08" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Jan 27 15:52:31 crc kubenswrapper[4767]: I0127 15:52:31.312787 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-ksqxd"
Jan 27 15:52:31 crc kubenswrapper[4767]: I0127 15:52:31.313217 4767 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksqxd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Jan 27 15:52:31 crc kubenswrapper[4767]: I0127 15:52:31.313267 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ksqxd" podUID="25e39933-042b-46a8-9e96-19acb0944e08" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Jan 27 15:52:31 crc kubenswrapper[4767]: I0127 15:52:31.313321 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"cbf24be75441564d52bf7e30a64cc331d4ca89c0fcc7e0dc90b39ede7cb56550"} pod="openshift-console/downloads-7954f5f757-ksqxd" containerMessage="Container download-server failed liveness probe, will be restarted"
Jan 27 15:52:31 crc kubenswrapper[4767]: I0127 15:52:31.313402 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-ksqxd" podUID="25e39933-042b-46a8-9e96-19acb0944e08" containerName="download-server" containerID="cri-o://cbf24be75441564d52bf7e30a64cc331d4ca89c0fcc7e0dc90b39ede7cb56550" gracePeriod=2
Jan 27 15:52:31 crc kubenswrapper[4767]: I0127 15:52:31.325989 4767 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-t67t2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Jan 27 15:52:31 crc kubenswrapper[4767]: I0127 15:52:31.326053 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" podUID="34c3a00d-6b69-4790-ba95-29ae01dd296f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused"
Jan 27 15:52:31 crc kubenswrapper[4767]: I0127 15:52:31.388954 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-vxkdk"
Jan 27 15:52:31 crc kubenswrapper[4767]: I0127 15:52:31.392397 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-vxkdk"
Jan 27 15:52:31 crc kubenswrapper[4767]: I0127 15:52:31.576598 4767 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7m254 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 27 15:52:31 crc kubenswrapper[4767]: I0127 15:52:31.576663 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" podUID="9d9edf4c-6df3-484c-9bb7-a344d8147aa6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 27 15:52:32 crc kubenswrapper[4767]: I0127 15:52:32.182702 4767 generic.go:334] "Generic (PLEG): container finished" podID="25e39933-042b-46a8-9e96-19acb0944e08" containerID="cbf24be75441564d52bf7e30a64cc331d4ca89c0fcc7e0dc90b39ede7cb56550" exitCode=0
Jan 27 15:52:32 crc kubenswrapper[4767]: I0127 15:52:32.182784 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ksqxd" event={"ID":"25e39933-042b-46a8-9e96-19acb0944e08","Type":"ContainerDied","Data":"cbf24be75441564d52bf7e30a64cc331d4ca89c0fcc7e0dc90b39ede7cb56550"}
Jan 27 15:52:34 crc kubenswrapper[4767]: I0127 15:52:34.908113 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp"
Jan 27 15:52:41 crc kubenswrapper[4767]: I0127 15:52:41.313795 4767 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksqxd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Jan 27 15:52:41 crc kubenswrapper[4767]: I0127 15:52:41.314424 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ksqxd" podUID="25e39933-042b-46a8-9e96-19acb0944e08" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Jan 27 15:52:41 crc kubenswrapper[4767]: I0127 15:52:41.325432 4767 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-t67t2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Jan 27 15:52:41 crc kubenswrapper[4767]: I0127 15:52:41.325536 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" podUID="34c3a00d-6b69-4790-ba95-29ae01dd296f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused"
Jan 27 15:52:41 crc kubenswrapper[4767]: I0127 15:52:41.577036 4767 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7m254 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 27 15:52:41 crc kubenswrapper[4767]: I0127 15:52:41.577104 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" podUID="9d9edf4c-6df3-484c-9bb7-a344d8147aa6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 27 15:52:42 crc kubenswrapper[4767]: I0127 15:52:42.099103 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6mgl"
Jan 27 15:52:45 crc kubenswrapper[4767]: I0127 15:52:45.254926 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-r296r" event={"ID":"03660290-055d-4f50-be45-3d6d9c023b34","Type":"ContainerStarted","Data":"2d90c41e9d1bcffbef11bd10127170759cb746aeaa81b8b5f4086ff524ff4406"}
Jan 27 15:52:46 crc kubenswrapper[4767]: I0127 15:52:46.534886 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 15:52:47 crc kubenswrapper[4767]: E0127 15:52:47.197836 4767 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 27 15:52:47 crc kubenswrapper[4767]: E0127 15:52:47.198286 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ms4kk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-6v8jc_openshift-marketplace(b45a028d-9f8c-4090-985b-e7ddf929554c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 15:52:47 crc kubenswrapper[4767]: E0127 15:52:47.199677 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-6v8jc" podUID="b45a028d-9f8c-4090-985b-e7ddf929554c"
Jan 27 15:52:51 crc kubenswrapper[4767]: I0127 15:52:51.315465 4767 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksqxd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Jan 27 15:52:51 crc kubenswrapper[4767]: I0127 15:52:51.316429 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ksqxd" podUID="25e39933-042b-46a8-9e96-19acb0944e08" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Jan 27 15:52:51 crc kubenswrapper[4767]: I0127 15:52:51.388367 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 27 15:52:51 crc kubenswrapper[4767]: E0127 15:52:51.388596 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49403f5b-925d-44e0-b168-5aeed908af4e" containerName="pruner"
Jan 27 15:52:51 crc kubenswrapper[4767]: I0127 15:52:51.388608 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="49403f5b-925d-44e0-b168-5aeed908af4e" containerName="pruner"
Jan 27 15:52:51 crc kubenswrapper[4767]: I0127 15:52:51.388700 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="49403f5b-925d-44e0-b168-5aeed908af4e" containerName="pruner"
Jan 27 15:52:51 crc kubenswrapper[4767]: I0127 15:52:51.389054 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 15:52:51 crc kubenswrapper[4767]: I0127 15:52:51.391078 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 27 15:52:51 crc kubenswrapper[4767]: I0127 15:52:51.391289 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 27 15:52:51 crc kubenswrapper[4767]: I0127 15:52:51.399373 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 27 15:52:51 crc kubenswrapper[4767]: E0127 15:52:51.453936 4767 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 27 15:52:51 crc kubenswrapper[4767]: E0127 15:52:51.454124 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wtcw7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-wm4cz_openshift-marketplace(eabb94a2-a935-40be-a094-1a71d904b222): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 15:52:51 crc kubenswrapper[4767]: E0127 15:52:51.455309 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-wm4cz" podUID="eabb94a2-a935-40be-a094-1a71d904b222"
Jan 27 15:52:51 crc kubenswrapper[4767]: I0127 15:52:51.498034 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/533a1fac-6603-4f9c-9a50-1095e44d1216-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"533a1fac-6603-4f9c-9a50-1095e44d1216\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 15:52:51 crc kubenswrapper[4767]: I0127 15:52:51.498154 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/533a1fac-6603-4f9c-9a50-1095e44d1216-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"533a1fac-6603-4f9c-9a50-1095e44d1216\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 15:52:51 crc kubenswrapper[4767]: I0127 15:52:51.599534 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/533a1fac-6603-4f9c-9a50-1095e44d1216-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"533a1fac-6603-4f9c-9a50-1095e44d1216\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 15:52:51 crc kubenswrapper[4767]: I0127 15:52:51.599647 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/533a1fac-6603-4f9c-9a50-1095e44d1216-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"533a1fac-6603-4f9c-9a50-1095e44d1216\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 15:52:51 crc kubenswrapper[4767]: I0127 15:52:51.599710 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/533a1fac-6603-4f9c-9a50-1095e44d1216-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"533a1fac-6603-4f9c-9a50-1095e44d1216\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 15:52:51 crc kubenswrapper[4767]: I0127 15:52:51.618583 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/533a1fac-6603-4f9c-9a50-1095e44d1216-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"533a1fac-6603-4f9c-9a50-1095e44d1216\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 15:52:51 crc kubenswrapper[4767]: I0127 15:52:51.736521 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 15:52:51 crc kubenswrapper[4767]: E0127 15:52:51.779120 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-6v8jc" podUID="b45a028d-9f8c-4090-985b-e7ddf929554c"
Jan 27 15:52:52 crc kubenswrapper[4767]: I0127 15:52:52.325019 4767 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-t67t2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 15:52:52 crc kubenswrapper[4767]: I0127 15:52:52.325088 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" podUID="34c3a00d-6b69-4790-ba95-29ae01dd296f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 15:52:52 crc kubenswrapper[4767]: I0127 15:52:52.577194 4767 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7m254 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 15:52:52 crc kubenswrapper[4767]: I0127 15:52:52.577283 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" podUID="9d9edf4c-6df3-484c-9bb7-a344d8147aa6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 15:52:52 crc kubenswrapper[4767]: E0127 15:52:52.674048 4767 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 27 15:52:52 crc kubenswrapper[4767]: E0127 15:52:52.674238 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldlg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-bnmj9_openshift-marketplace(69b7edc7-f8c2-4e0e-923c-b5a3395ae14d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 15:52:52 crc kubenswrapper[4767]: E0127 15:52:52.675399 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-bnmj9" podUID="69b7edc7-f8c2-4e0e-923c-b5a3395ae14d"
Jan 27 15:52:54 crc kubenswrapper[4767]: I0127 15:52:54.857569 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 15:52:54 crc kubenswrapper[4767]: I0127 15:52:54.857956 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 15:52:56 crc kubenswrapper[4767]: I0127 15:52:56.384950 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 27 15:52:56 crc kubenswrapper[4767]: I0127 15:52:56.385779 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 15:52:56 crc kubenswrapper[4767]: I0127 15:52:56.394089 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 27 15:52:56 crc kubenswrapper[4767]: I0127 15:52:56.460698 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/696728d7-87d8-4e30-a896-472a5b86d1ca-kubelet-dir\") pod \"installer-9-crc\" (UID: \"696728d7-87d8-4e30-a896-472a5b86d1ca\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 15:52:56 crc kubenswrapper[4767]: I0127 15:52:56.460750 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/696728d7-87d8-4e30-a896-472a5b86d1ca-var-lock\") pod \"installer-9-crc\" (UID: \"696728d7-87d8-4e30-a896-472a5b86d1ca\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 15:52:56 crc kubenswrapper[4767]: I0127 15:52:56.460837 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/696728d7-87d8-4e30-a896-472a5b86d1ca-kube-api-access\") pod \"installer-9-crc\" (UID: \"696728d7-87d8-4e30-a896-472a5b86d1ca\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 15:52:56 crc kubenswrapper[4767]: I0127 15:52:56.562228 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/696728d7-87d8-4e30-a896-472a5b86d1ca-kubelet-dir\") pod \"installer-9-crc\" (UID: \"696728d7-87d8-4e30-a896-472a5b86d1ca\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 15:52:56 crc kubenswrapper[4767]: I0127 15:52:56.562274 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/696728d7-87d8-4e30-a896-472a5b86d1ca-var-lock\") pod \"installer-9-crc\" (UID: \"696728d7-87d8-4e30-a896-472a5b86d1ca\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 15:52:56 crc kubenswrapper[4767]: I0127 15:52:56.562314 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/696728d7-87d8-4e30-a896-472a5b86d1ca-kube-api-access\") pod \"installer-9-crc\" (UID: \"696728d7-87d8-4e30-a896-472a5b86d1ca\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 15:52:56 crc kubenswrapper[4767]: I0127 15:52:56.562359 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/696728d7-87d8-4e30-a896-472a5b86d1ca-kubelet-dir\") pod \"installer-9-crc\" (UID: \"696728d7-87d8-4e30-a896-472a5b86d1ca\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 15:52:56 crc kubenswrapper[4767]: I0127 15:52:56.562451 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/696728d7-87d8-4e30-a896-472a5b86d1ca-var-lock\") pod \"installer-9-crc\" (UID: \"696728d7-87d8-4e30-a896-472a5b86d1ca\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 15:52:56 crc kubenswrapper[4767]: I0127 15:52:56.584625 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/696728d7-87d8-4e30-a896-472a5b86d1ca-kube-api-access\") pod \"installer-9-crc\" (UID: \"696728d7-87d8-4e30-a896-472a5b86d1ca\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 15:52:56 crc kubenswrapper[4767]: I0127 15:52:56.712705 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 27 15:53:01 crc kubenswrapper[4767]: I0127 15:53:01.312876 4767 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksqxd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Jan 27 15:53:01 crc kubenswrapper[4767]: I0127 15:53:01.313308 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ksqxd" podUID="25e39933-042b-46a8-9e96-19acb0944e08" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Jan 27 15:53:02 crc kubenswrapper[4767]: I0127 15:53:02.325333 4767 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-t67t2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 15:53:02 crc kubenswrapper[4767]: I0127 15:53:02.326440 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" podUID="34c3a00d-6b69-4790-ba95-29ae01dd296f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 15:53:02 crc kubenswrapper[4767]: I0127 15:53:02.576678 4767 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7m254 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 15:53:02 crc kubenswrapper[4767]: I0127 15:53:02.577070 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" podUID="9d9edf4c-6df3-484c-9bb7-a344d8147aa6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 15:53:02 crc kubenswrapper[4767]: I0127 15:53:02.807233 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" Jan 27 15:53:02 crc kubenswrapper[4767]: I0127 15:53:02.854856 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-66b6c8bc98-th2g2"] Jan 27 15:53:02 crc kubenswrapper[4767]: E0127 15:53:02.855223 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d9edf4c-6df3-484c-9bb7-a344d8147aa6" containerName="controller-manager" Jan 27 15:53:02 crc kubenswrapper[4767]: I0127 15:53:02.855246 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d9edf4c-6df3-484c-9bb7-a344d8147aa6" containerName="controller-manager" Jan 27 15:53:02 crc kubenswrapper[4767]: I0127 15:53:02.855464 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d9edf4c-6df3-484c-9bb7-a344d8147aa6" containerName="controller-manager" Jan 27 15:53:02 crc kubenswrapper[4767]: I0127 15:53:02.856151 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66b6c8bc98-th2g2" Jan 27 15:53:02 crc kubenswrapper[4767]: I0127 15:53:02.861084 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66b6c8bc98-th2g2"] Jan 27 15:53:02 crc kubenswrapper[4767]: I0127 15:53:02.957453 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-proxy-ca-bundles\") pod \"9d9edf4c-6df3-484c-9bb7-a344d8147aa6\" (UID: \"9d9edf4c-6df3-484c-9bb7-a344d8147aa6\") " Jan 27 15:53:02 crc kubenswrapper[4767]: I0127 15:53:02.957524 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-config\") pod \"9d9edf4c-6df3-484c-9bb7-a344d8147aa6\" (UID: \"9d9edf4c-6df3-484c-9bb7-a344d8147aa6\") " Jan 27 15:53:02 crc kubenswrapper[4767]: I0127 15:53:02.957558 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-client-ca\") pod \"9d9edf4c-6df3-484c-9bb7-a344d8147aa6\" (UID: \"9d9edf4c-6df3-484c-9bb7-a344d8147aa6\") " Jan 27 15:53:02 crc kubenswrapper[4767]: I0127 15:53:02.957595 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kfcc\" (UniqueName: \"kubernetes.io/projected/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-kube-api-access-4kfcc\") pod \"9d9edf4c-6df3-484c-9bb7-a344d8147aa6\" (UID: \"9d9edf4c-6df3-484c-9bb7-a344d8147aa6\") " Jan 27 15:53:02 crc kubenswrapper[4767]: I0127 15:53:02.957624 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-serving-cert\") pod \"9d9edf4c-6df3-484c-9bb7-a344d8147aa6\" (UID: \"9d9edf4c-6df3-484c-9bb7-a344d8147aa6\") " Jan 27 15:53:02 crc kubenswrapper[4767]: I0127 15:53:02.957826 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c32bc6a2-2754-4fff-8018-f9791b6a8ced-serving-cert\") pod \"controller-manager-66b6c8bc98-th2g2\" (UID: \"c32bc6a2-2754-4fff-8018-f9791b6a8ced\") " pod="openshift-controller-manager/controller-manager-66b6c8bc98-th2g2" Jan 27 15:53:02 crc kubenswrapper[4767]: I0127 15:53:02.957863 4767 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmldc\" (UniqueName: \"kubernetes.io/projected/c32bc6a2-2754-4fff-8018-f9791b6a8ced-kube-api-access-lmldc\") pod \"controller-manager-66b6c8bc98-th2g2\" (UID: \"c32bc6a2-2754-4fff-8018-f9791b6a8ced\") " pod="openshift-controller-manager/controller-manager-66b6c8bc98-th2g2" Jan 27 15:53:02 crc kubenswrapper[4767]: I0127 15:53:02.957893 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c32bc6a2-2754-4fff-8018-f9791b6a8ced-proxy-ca-bundles\") pod \"controller-manager-66b6c8bc98-th2g2\" (UID: \"c32bc6a2-2754-4fff-8018-f9791b6a8ced\") " pod="openshift-controller-manager/controller-manager-66b6c8bc98-th2g2" Jan 27 15:53:02 crc kubenswrapper[4767]: I0127 15:53:02.957934 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c32bc6a2-2754-4fff-8018-f9791b6a8ced-config\") pod \"controller-manager-66b6c8bc98-th2g2\" (UID: \"c32bc6a2-2754-4fff-8018-f9791b6a8ced\") " pod="openshift-controller-manager/controller-manager-66b6c8bc98-th2g2" Jan 27 15:53:02 crc kubenswrapper[4767]: I0127 15:53:02.957956 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c32bc6a2-2754-4fff-8018-f9791b6a8ced-client-ca\") pod \"controller-manager-66b6c8bc98-th2g2\" (UID: \"c32bc6a2-2754-4fff-8018-f9791b6a8ced\") " pod="openshift-controller-manager/controller-manager-66b6c8bc98-th2g2" Jan 27 15:53:02 crc kubenswrapper[4767]: I0127 15:53:02.958633 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9d9edf4c-6df3-484c-9bb7-a344d8147aa6" (UID: "9d9edf4c-6df3-484c-9bb7-a344d8147aa6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:53:02 crc kubenswrapper[4767]: I0127 15:53:02.959186 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-config" (OuterVolumeSpecName: "config") pod "9d9edf4c-6df3-484c-9bb7-a344d8147aa6" (UID: "9d9edf4c-6df3-484c-9bb7-a344d8147aa6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:53:02 crc kubenswrapper[4767]: I0127 15:53:02.959769 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-client-ca" (OuterVolumeSpecName: "client-ca") pod "9d9edf4c-6df3-484c-9bb7-a344d8147aa6" (UID: "9d9edf4c-6df3-484c-9bb7-a344d8147aa6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:53:02 crc kubenswrapper[4767]: I0127 15:53:02.964954 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-kube-api-access-4kfcc" (OuterVolumeSpecName: "kube-api-access-4kfcc") pod "9d9edf4c-6df3-484c-9bb7-a344d8147aa6" (UID: "9d9edf4c-6df3-484c-9bb7-a344d8147aa6"). InnerVolumeSpecName "kube-api-access-4kfcc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:53:02 crc kubenswrapper[4767]: I0127 15:53:02.965812 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d9edf4c-6df3-484c-9bb7-a344d8147aa6" (UID: "9d9edf4c-6df3-484c-9bb7-a344d8147aa6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:53:03 crc kubenswrapper[4767]: I0127 15:53:03.058720 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c32bc6a2-2754-4fff-8018-f9791b6a8ced-serving-cert\") pod \"controller-manager-66b6c8bc98-th2g2\" (UID: \"c32bc6a2-2754-4fff-8018-f9791b6a8ced\") " pod="openshift-controller-manager/controller-manager-66b6c8bc98-th2g2" Jan 27 15:53:03 crc kubenswrapper[4767]: I0127 15:53:03.058775 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmldc\" (UniqueName: \"kubernetes.io/projected/c32bc6a2-2754-4fff-8018-f9791b6a8ced-kube-api-access-lmldc\") pod \"controller-manager-66b6c8bc98-th2g2\" (UID: \"c32bc6a2-2754-4fff-8018-f9791b6a8ced\") " pod="openshift-controller-manager/controller-manager-66b6c8bc98-th2g2" Jan 27 15:53:03 crc kubenswrapper[4767]: I0127 15:53:03.058808 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c32bc6a2-2754-4fff-8018-f9791b6a8ced-proxy-ca-bundles\") pod \"controller-manager-66b6c8bc98-th2g2\" (UID: \"c32bc6a2-2754-4fff-8018-f9791b6a8ced\") " pod="openshift-controller-manager/controller-manager-66b6c8bc98-th2g2" Jan 27 15:53:03 crc kubenswrapper[4767]: I0127 15:53:03.058858 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c32bc6a2-2754-4fff-8018-f9791b6a8ced-config\") pod \"controller-manager-66b6c8bc98-th2g2\" (UID: \"c32bc6a2-2754-4fff-8018-f9791b6a8ced\") " pod="openshift-controller-manager/controller-manager-66b6c8bc98-th2g2" Jan 27 15:53:03 crc kubenswrapper[4767]: I0127 15:53:03.058879 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c32bc6a2-2754-4fff-8018-f9791b6a8ced-client-ca\") pod \"controller-manager-66b6c8bc98-th2g2\" (UID: \"c32bc6a2-2754-4fff-8018-f9791b6a8ced\") " pod="openshift-controller-manager/controller-manager-66b6c8bc98-th2g2" Jan 27 15:53:03 crc kubenswrapper[4767]: I0127 15:53:03.058926 4767 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 15:53:03 crc kubenswrapper[4767]: I0127 15:53:03.058939 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:53:03 crc kubenswrapper[4767]: I0127 15:53:03.058951 4767 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:53:03 crc kubenswrapper[4767]: I0127 15:53:03.059048 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kfcc\" (UniqueName: 
\"kubernetes.io/projected/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-kube-api-access-4kfcc\") on node \"crc\" DevicePath \"\"" Jan 27 15:53:03 crc kubenswrapper[4767]: I0127 15:53:03.059174 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d9edf4c-6df3-484c-9bb7-a344d8147aa6-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:53:03 crc kubenswrapper[4767]: I0127 15:53:03.060022 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c32bc6a2-2754-4fff-8018-f9791b6a8ced-client-ca\") pod \"controller-manager-66b6c8bc98-th2g2\" (UID: \"c32bc6a2-2754-4fff-8018-f9791b6a8ced\") " pod="openshift-controller-manager/controller-manager-66b6c8bc98-th2g2" Jan 27 15:53:03 crc kubenswrapper[4767]: I0127 15:53:03.060134 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c32bc6a2-2754-4fff-8018-f9791b6a8ced-proxy-ca-bundles\") pod \"controller-manager-66b6c8bc98-th2g2\" (UID: \"c32bc6a2-2754-4fff-8018-f9791b6a8ced\") " pod="openshift-controller-manager/controller-manager-66b6c8bc98-th2g2" Jan 27 15:53:03 crc kubenswrapper[4767]: I0127 15:53:03.060358 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c32bc6a2-2754-4fff-8018-f9791b6a8ced-config\") pod \"controller-manager-66b6c8bc98-th2g2\" (UID: \"c32bc6a2-2754-4fff-8018-f9791b6a8ced\") " pod="openshift-controller-manager/controller-manager-66b6c8bc98-th2g2" Jan 27 15:53:03 crc kubenswrapper[4767]: I0127 15:53:03.062250 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c32bc6a2-2754-4fff-8018-f9791b6a8ced-serving-cert\") pod \"controller-manager-66b6c8bc98-th2g2\" (UID: \"c32bc6a2-2754-4fff-8018-f9791b6a8ced\") " pod="openshift-controller-manager/controller-manager-66b6c8bc98-th2g2" Jan 27 15:53:03 crc kubenswrapper[4767]: I0127 15:53:03.079880 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmldc\" (UniqueName: \"kubernetes.io/projected/c32bc6a2-2754-4fff-8018-f9791b6a8ced-kube-api-access-lmldc\") pod \"controller-manager-66b6c8bc98-th2g2\" (UID: \"c32bc6a2-2754-4fff-8018-f9791b6a8ced\") " pod="openshift-controller-manager/controller-manager-66b6c8bc98-th2g2" Jan 27 15:53:03 crc kubenswrapper[4767]: I0127 15:53:03.171537 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66b6c8bc98-th2g2" Jan 27 15:53:03 crc kubenswrapper[4767]: I0127 15:53:03.342929 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" event={"ID":"9d9edf4c-6df3-484c-9bb7-a344d8147aa6","Type":"ContainerDied","Data":"703f253827bdfdf33b79d0813a0da80faf5e0b5a00a0e21d64ea357858f3b3e0"} Jan 27 15:53:03 crc kubenswrapper[4767]: I0127 15:53:03.343022 4767 scope.go:117] "RemoveContainer" containerID="721bdd3be33645608399716428d5c6efade9c188c0d547618c1141db7d4a606e" Jan 27 15:53:03 crc kubenswrapper[4767]: I0127 15:53:03.343044 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7m254" Jan 27 15:53:03 crc kubenswrapper[4767]: I0127 15:53:03.375888 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7m254"] Jan 27 15:53:03 crc kubenswrapper[4767]: I0127 15:53:03.379732 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7m254"] Jan 27 15:53:04 crc kubenswrapper[4767]: I0127 15:53:04.335989 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d9edf4c-6df3-484c-9bb7-a344d8147aa6" path="/var/lib/kubelet/pods/9d9edf4c-6df3-484c-9bb7-a344d8147aa6/volumes" Jan 27 15:53:04 crc kubenswrapper[4767]: E0127 15:53:04.581732 4767 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 15:53:04 crc kubenswrapper[4767]: E0127 15:53:04.581923 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b2nbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7nshp_openshift-marketplace(84510a56-8f29-404f-b5eb-c7433db1de6b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 15:53:04 crc kubenswrapper[4767]: E0127 15:53:04.583915 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-7nshp" podUID="84510a56-8f29-404f-b5eb-c7433db1de6b" Jan 27 15:53:11 crc kubenswrapper[4767]: E0127 15:53:11.148600 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off 
pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7nshp" podUID="84510a56-8f29-404f-b5eb-c7433db1de6b" Jan 27 15:53:11 crc kubenswrapper[4767]: I0127 15:53:11.312494 4767 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksqxd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 27 15:53:11 crc kubenswrapper[4767]: I0127 15:53:11.312561 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ksqxd" podUID="25e39933-042b-46a8-9e96-19acb0944e08" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 27 15:53:12 crc kubenswrapper[4767]: I0127 15:53:12.325341 4767 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-t67t2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 15:53:12 crc kubenswrapper[4767]: I0127 15:53:12.325434 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" podUID="34c3a00d-6b69-4790-ba95-29ae01dd296f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 15:53:14 crc kubenswrapper[4767]: E0127 15:53:14.129979 4767 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 15:53:14 crc kubenswrapper[4767]: E0127 15:53:14.130390 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5tmz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-6pz42_openshift-marketplace(53c82776-5f8d-496e-a045-428e96b9f87c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 15:53:14 crc kubenswrapper[4767]: E0127 15:53:14.131590 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-6pz42" podUID="53c82776-5f8d-496e-a045-428e96b9f87c" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.409390 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" event={"ID":"34c3a00d-6b69-4790-ba95-29ae01dd296f","Type":"ContainerDied","Data":"4ead8b43fba68d45b2586a86ffcde833e7eb6b061e2a7ec5cecae34437f37a15"} Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.409995 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ead8b43fba68d45b2586a86ffcde833e7eb6b061e2a7ec5cecae34437f37a15" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.421608 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.446603 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2"] Jan 27 15:53:15 crc kubenswrapper[4767]: E0127 15:53:15.446799 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34c3a00d-6b69-4790-ba95-29ae01dd296f" containerName="route-controller-manager" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.446810 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="34c3a00d-6b69-4790-ba95-29ae01dd296f" containerName="route-controller-manager" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.446906 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="34c3a00d-6b69-4790-ba95-29ae01dd296f" containerName="route-controller-manager" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.447294 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.468773 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2"] Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.540048 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czwwg\" (UniqueName: \"kubernetes.io/projected/34c3a00d-6b69-4790-ba95-29ae01dd296f-kube-api-access-czwwg\") pod \"34c3a00d-6b69-4790-ba95-29ae01dd296f\" (UID: \"34c3a00d-6b69-4790-ba95-29ae01dd296f\") " Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.540293 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34c3a00d-6b69-4790-ba95-29ae01dd296f-config\") pod \"34c3a00d-6b69-4790-ba95-29ae01dd296f\" (UID: \"34c3a00d-6b69-4790-ba95-29ae01dd296f\") " Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.540438 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34c3a00d-6b69-4790-ba95-29ae01dd296f-serving-cert\") pod \"34c3a00d-6b69-4790-ba95-29ae01dd296f\" (UID: \"34c3a00d-6b69-4790-ba95-29ae01dd296f\") " Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.540493 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34c3a00d-6b69-4790-ba95-29ae01dd296f-client-ca\") pod \"34c3a00d-6b69-4790-ba95-29ae01dd296f\" (UID: \"34c3a00d-6b69-4790-ba95-29ae01dd296f\") " Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.540661 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wnb2\" (UniqueName: \"kubernetes.io/projected/9c86e5b5-fabb-4448-85d3-44fbb1addf8a-kube-api-access-5wnb2\") pod \"route-controller-manager-8bb5d5478-tkjp2\" (UID: \"9c86e5b5-fabb-4448-85d3-44fbb1addf8a\") " pod="openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.540688 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c86e5b5-fabb-4448-85d3-44fbb1addf8a-serving-cert\") pod \"route-controller-manager-8bb5d5478-tkjp2\" (UID: 
\"9c86e5b5-fabb-4448-85d3-44fbb1addf8a\") " pod="openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.540727 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c86e5b5-fabb-4448-85d3-44fbb1addf8a-config\") pod \"route-controller-manager-8bb5d5478-tkjp2\" (UID: \"9c86e5b5-fabb-4448-85d3-44fbb1addf8a\") " pod="openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.540744 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c86e5b5-fabb-4448-85d3-44fbb1addf8a-client-ca\") pod \"route-controller-manager-8bb5d5478-tkjp2\" (UID: \"9c86e5b5-fabb-4448-85d3-44fbb1addf8a\") " pod="openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.544853 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34c3a00d-6b69-4790-ba95-29ae01dd296f-client-ca" (OuterVolumeSpecName: "client-ca") pod "34c3a00d-6b69-4790-ba95-29ae01dd296f" (UID: "34c3a00d-6b69-4790-ba95-29ae01dd296f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.545404 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34c3a00d-6b69-4790-ba95-29ae01dd296f-kube-api-access-czwwg" (OuterVolumeSpecName: "kube-api-access-czwwg") pod "34c3a00d-6b69-4790-ba95-29ae01dd296f" (UID: "34c3a00d-6b69-4790-ba95-29ae01dd296f"). InnerVolumeSpecName "kube-api-access-czwwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.545512 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34c3a00d-6b69-4790-ba95-29ae01dd296f-config" (OuterVolumeSpecName: "config") pod "34c3a00d-6b69-4790-ba95-29ae01dd296f" (UID: "34c3a00d-6b69-4790-ba95-29ae01dd296f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.549446 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34c3a00d-6b69-4790-ba95-29ae01dd296f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "34c3a00d-6b69-4790-ba95-29ae01dd296f" (UID: "34c3a00d-6b69-4790-ba95-29ae01dd296f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.642360 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c86e5b5-fabb-4448-85d3-44fbb1addf8a-config\") pod \"route-controller-manager-8bb5d5478-tkjp2\" (UID: \"9c86e5b5-fabb-4448-85d3-44fbb1addf8a\") " pod="openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.642407 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c86e5b5-fabb-4448-85d3-44fbb1addf8a-client-ca\") pod \"route-controller-manager-8bb5d5478-tkjp2\" (UID: \"9c86e5b5-fabb-4448-85d3-44fbb1addf8a\") " pod="openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.642491 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wnb2\" (UniqueName: \"kubernetes.io/projected/9c86e5b5-fabb-4448-85d3-44fbb1addf8a-kube-api-access-5wnb2\") pod \"route-controller-manager-8bb5d5478-tkjp2\" (UID: \"9c86e5b5-fabb-4448-85d3-44fbb1addf8a\") " pod="openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.642522 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c86e5b5-fabb-4448-85d3-44fbb1addf8a-serving-cert\") pod \"route-controller-manager-8bb5d5478-tkjp2\" (UID: \"9c86e5b5-fabb-4448-85d3-44fbb1addf8a\") " pod="openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.642568 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czwwg\" (UniqueName: \"kubernetes.io/projected/34c3a00d-6b69-4790-ba95-29ae01dd296f-kube-api-access-czwwg\") on node \"crc\" DevicePath \"\"" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.642581 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34c3a00d-6b69-4790-ba95-29ae01dd296f-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.642592 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34c3a00d-6b69-4790-ba95-29ae01dd296f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.642603 4767 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34c3a00d-6b69-4790-ba95-29ae01dd296f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.643750 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c86e5b5-fabb-4448-85d3-44fbb1addf8a-client-ca\") pod \"route-controller-manager-8bb5d5478-tkjp2\" (UID: \"9c86e5b5-fabb-4448-85d3-44fbb1addf8a\") " pod="openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.644657 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c86e5b5-fabb-4448-85d3-44fbb1addf8a-config\") pod \"route-controller-manager-8bb5d5478-tkjp2\" (UID: 
\"9c86e5b5-fabb-4448-85d3-44fbb1addf8a\") " pod="openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.649067 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c86e5b5-fabb-4448-85d3-44fbb1addf8a-serving-cert\") pod \"route-controller-manager-8bb5d5478-tkjp2\" (UID: \"9c86e5b5-fabb-4448-85d3-44fbb1addf8a\") " pod="openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.662247 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wnb2\" (UniqueName: \"kubernetes.io/projected/9c86e5b5-fabb-4448-85d3-44fbb1addf8a-kube-api-access-5wnb2\") pod \"route-controller-manager-8bb5d5478-tkjp2\" (UID: \"9c86e5b5-fabb-4448-85d3-44fbb1addf8a\") " pod="openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.840869 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.868244 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66b6c8bc98-th2g2"] Jan 27 15:53:15 crc kubenswrapper[4767]: W0127 15:53:15.873549 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc32bc6a2_2754_4fff_8018_f9791b6a8ced.slice/crio-d88068125e60dccdaa9495c251e6191ab701e8156bcbef6bcef97c0179d474da WatchSource:0}: Error finding container d88068125e60dccdaa9495c251e6191ab701e8156bcbef6bcef97c0179d474da: Status 404 returned error can't find the container with id d88068125e60dccdaa9495c251e6191ab701e8156bcbef6bcef97c0179d474da Jan 27 15:53:15 crc kubenswrapper[4767]: E0127 15:53:15.882210 4767 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 15:53:15 crc kubenswrapper[4767]: E0127 15:53:15.882636 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lhp7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-r7zcn_openshift-marketplace(0e3e0a9a-9b2b-4cf4-9f92-847e870be858): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 15:53:15 crc kubenswrapper[4767]: E0127 15:53:15.883771 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-r7zcn" podUID="0e3e0a9a-9b2b-4cf4-9f92-847e870be858" Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.899873 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 15:53:15 crc kubenswrapper[4767]: I0127 15:53:15.958165 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 15:53:15 crc kubenswrapper[4767]: W0127 15:53:15.974982 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod696728d7_87d8_4e30_a896_472a5b86d1ca.slice/crio-946c2b3a62299b904d22a6dde2c480129e59f072eaed3f58abd8e8b0d81a551a WatchSource:0}: Error finding container 946c2b3a62299b904d22a6dde2c480129e59f072eaed3f58abd8e8b0d81a551a: Status 404 returned error can't find the container with id 946c2b3a62299b904d22a6dde2c480129e59f072eaed3f58abd8e8b0d81a551a Jan 27 15:53:16 crc kubenswrapper[4767]: I0127 15:53:16.040920 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2"] Jan 27 15:53:16 crc kubenswrapper[4767]: W0127 15:53:16.057282 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c86e5b5_fabb_4448_85d3_44fbb1addf8a.slice/crio-7902a208df824ff001a685786c70a4b4e41b01948ad40dfd4d193ec0ce0b4c8d WatchSource:0}: Error finding container 7902a208df824ff001a685786c70a4b4e41b01948ad40dfd4d193ec0ce0b4c8d: Status 404 returned error can't find the container with id 7902a208df824ff001a685786c70a4b4e41b01948ad40dfd4d193ec0ce0b4c8d 
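(analysis note) The recurring ErrImagePull / ImagePullBackOff records above all follow the same shape: the CRI pull of an operator catalog index image is cancelled mid-copy ("context canceled"), kuberuntime_manager logs an "Unhandled Error" that dumps the failing extract-content init container spec verbatim, and pod_workers puts the pod into back-off. The one-line `&Container{...}` blob is just Go's default rendering of the `k8s.io/api/core/v1` Container struct. For readability, here is a minimal sketch of the same spec as a Go literal — field values are copied from the certified-operators-r7zcn record above; the surrounding program and variable names are illustrative and not part of the log:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Pointer-valued security fields, shown as *1000170000, *true, *false in the log dump.
	runAsUser := int64(1000170000)
	runAsNonRoot := true
	allowPrivEsc := false

	// The extract-content init container as dumped by kuberuntime_manager.go:1274.
	extractContent := corev1.Container{
		Name:    "extract-content",
		Image:   "registry.redhat.io/redhat/certified-operator-index:v4.18",
		Command: []string{"/utilities/copy-content"},
		Args: []string{
			"--catalog.from=/configs",
			"--catalog.to=/extracted-catalog/catalog",
			"--cache.from=/tmp/cache",
			"--cache.to=/extracted-catalog/cache",
		},
		VolumeMounts: []corev1.VolumeMount{
			{Name: "utilities", MountPath: "/utilities"},
			{Name: "catalog-content", MountPath: "/extracted-catalog"},
			{Name: "kube-api-access-lhp7k", ReadOnly: true,
				MountPath: "/var/run/secrets/kubernetes.io/serviceaccount"},
		},
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
		ImagePullPolicy:          corev1.PullAlways,
		SecurityContext: &corev1.SecurityContext{
			Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
			RunAsUser:                &runAsUser,
			RunAsNonRoot:             &runAsNonRoot,
			AllowPrivilegeEscalation: &allowPrivEsc,
		},
	}
	fmt.Printf("%+v\n", extractContent)
}
```

Because ImagePullPolicy is Always, every new sandbox attempt re-pulls the index image, and between failed attempts the kubelet applies its image-pull back-off (per the Kubernetes documentation, an increasing delay capped at roughly five minutes) — which is why the same pods cycle through "PullImage from image service failed", "Error syncing pod, skipping", and "Back-off pulling image" throughout this window.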
Jan 27 15:53:16 crc kubenswrapper[4767]: I0127 15:53:16.436560 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"696728d7-87d8-4e30-a896-472a5b86d1ca","Type":"ContainerStarted","Data":"e99af25e14750be6befe3efd59265293cdf70317fe2d422be204edb374b1a229"} Jan 27 15:53:16 crc kubenswrapper[4767]: I0127 15:53:16.436944 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"696728d7-87d8-4e30-a896-472a5b86d1ca","Type":"ContainerStarted","Data":"946c2b3a62299b904d22a6dde2c480129e59f072eaed3f58abd8e8b0d81a551a"} Jan 27 15:53:16 crc kubenswrapper[4767]: I0127 15:53:16.438927 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2" event={"ID":"9c86e5b5-fabb-4448-85d3-44fbb1addf8a","Type":"ContainerStarted","Data":"69a828043105f2b295468b0d6c4ab84751b2d56cdfe03f84fefd9330db00970a"} Jan 27 15:53:16 crc kubenswrapper[4767]: I0127 15:53:16.438961 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2" event={"ID":"9c86e5b5-fabb-4448-85d3-44fbb1addf8a","Type":"ContainerStarted","Data":"7902a208df824ff001a685786c70a4b4e41b01948ad40dfd4d193ec0ce0b4c8d"} Jan 27 15:53:16 crc kubenswrapper[4767]: I0127 15:53:16.440630 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-r296r" event={"ID":"03660290-055d-4f50-be45-3d6d9c023b34","Type":"ContainerStarted","Data":"8675f7a941ca3586846e3669e6787e9624cd4fe65bef29750f474d8a6ec6e9ae"} Jan 27 15:53:16 crc kubenswrapper[4767]: I0127 15:53:16.443977 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ksqxd" event={"ID":"25e39933-042b-46a8-9e96-19acb0944e08","Type":"ContainerStarted","Data":"bbac9c38e8ed8c182fae0681643ed15fea9d8d3d242d600ebbaad0c2944a05b0"} Jan 27 15:53:16 crc kubenswrapper[4767]: I0127 15:53:16.445091 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-ksqxd" Jan 27 15:53:16 crc kubenswrapper[4767]: I0127 15:53:16.448801 4767 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksqxd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 27 15:53:16 crc kubenswrapper[4767]: I0127 15:53:16.448854 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ksqxd" podUID="25e39933-042b-46a8-9e96-19acb0944e08" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 27 15:53:16 crc kubenswrapper[4767]: I0127 15:53:16.450877 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66b6c8bc98-th2g2" event={"ID":"c32bc6a2-2754-4fff-8018-f9791b6a8ced","Type":"ContainerStarted","Data":"f571edc4d17e9fb98acd654b22022dd3c3a0c216fe8d4df68af4775d4d0e0a41"} Jan 27 15:53:16 crc kubenswrapper[4767]: I0127 15:53:16.450920 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66b6c8bc98-th2g2" event={"ID":"c32bc6a2-2754-4fff-8018-f9791b6a8ced","Type":"ContainerStarted","Data":"d88068125e60dccdaa9495c251e6191ab701e8156bcbef6bcef97c0179d474da"} Jan 27 
15:53:16 crc kubenswrapper[4767]: I0127 15:53:16.451382 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-66b6c8bc98-th2g2" Jan 27 15:53:16 crc kubenswrapper[4767]: I0127 15:53:16.456352 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-66b6c8bc98-th2g2" Jan 27 15:53:16 crc kubenswrapper[4767]: I0127 15:53:16.458943 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"533a1fac-6603-4f9c-9a50-1095e44d1216","Type":"ContainerStarted","Data":"8f1d030ff006c5df5b0b88af161c2bfd87a16a58e178e42c089eb534a7c84b36"} Jan 27 15:53:16 crc kubenswrapper[4767]: I0127 15:53:16.458992 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"533a1fac-6603-4f9c-9a50-1095e44d1216","Type":"ContainerStarted","Data":"9ece2e78b45af4b0cd9f5609fb9e13a7f4ffeb344c26b91d60fd0b82f314bd2a"} Jan 27 15:53:16 crc kubenswrapper[4767]: I0127 15:53:16.460619 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2" Jan 27 15:53:16 crc kubenswrapper[4767]: I0127 15:53:16.489649 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2"] Jan 27 15:53:16 crc kubenswrapper[4767]: I0127 15:53:16.500606 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t67t2"] Jan 27 15:53:16 crc kubenswrapper[4767]: I0127 15:53:16.508463 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-66b6c8bc98-th2g2" podStartSLOduration=27.508447479 podStartE2EDuration="27.508447479s" podCreationTimestamp="2026-01-27 15:52:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:53:16.505271436 +0000 UTC m=+218.894289369" watchObservedRunningTime="2026-01-27 15:53:16.508447479 +0000 UTC m=+218.897465002" Jan 27 15:53:16 crc kubenswrapper[4767]: E0127 15:53:16.610343 4767 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 15:53:16 crc kubenswrapper[4767]: E0127 15:53:16.610495 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7pdfj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-lbhhq_openshift-marketplace(5f897714-8bcf-4ec4-8be0-86dfb0fc4785): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 15:53:16 crc kubenswrapper[4767]: E0127 15:53:16.613351 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-lbhhq" podUID="5f897714-8bcf-4ec4-8be0-86dfb0fc4785"
Jan 27 15:53:17 crc kubenswrapper[4767]: E0127 15:53:17.367850 4767 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 27 15:53:17 crc kubenswrapper[4767]: E0127 15:53:17.368256 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mqznc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-7pmbd_openshift-marketplace(43f8f2c5-51fc-4707-903f-fef9c5f133c5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 15:53:17 crc kubenswrapper[4767]: E0127 15:53:17.370191 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-7pmbd" podUID="43f8f2c5-51fc-4707-903f-fef9c5f133c5"
Jan 27 15:53:17 crc kubenswrapper[4767]: I0127 15:53:17.467382 4767 generic.go:334] "Generic (PLEG): container finished" podID="533a1fac-6603-4f9c-9a50-1095e44d1216" containerID="8f1d030ff006c5df5b0b88af161c2bfd87a16a58e178e42c089eb534a7c84b36" exitCode=0
Jan 27 15:53:17 crc kubenswrapper[4767]: I0127 15:53:17.467505 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"533a1fac-6603-4f9c-9a50-1095e44d1216","Type":"ContainerDied","Data":"8f1d030ff006c5df5b0b88af161c2bfd87a16a58e178e42c089eb534a7c84b36"}
Jan 27 15:53:17 crc kubenswrapper[4767]: I0127 15:53:17.470833 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-r296r" event={"ID":"03660290-055d-4f50-be45-3d6d9c023b34","Type":"ContainerStarted","Data":"0a2595f51e12eace9ce91d7fec7b6fddbd86eff2a549dec3726b73d546f9335c"}
Jan 27 15:53:17 crc kubenswrapper[4767]: I0127 15:53:17.471829 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2"
Jan 27 15:53:17 crc kubenswrapper[4767]: I0127 15:53:17.472051 4767 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksqxd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Jan 27 15:53:17 crc kubenswrapper[4767]: I0127 15:53:17.472123 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ksqxd" podUID="25e39933-042b-46a8-9e96-19acb0944e08" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Jan 27 15:53:17 crc kubenswrapper[4767]: I0127 15:53:17.476235 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2"
Jan 27 15:53:17 crc kubenswrapper[4767]: I0127 15:53:17.531079 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-r296r" podStartSLOduration=199.531053043 podStartE2EDuration="3m19.531053043s" podCreationTimestamp="2026-01-27 15:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:53:17.527459078 +0000 UTC m=+219.916476621" watchObservedRunningTime="2026-01-27 15:53:17.531053043 +0000 UTC m=+219.920070566"
Jan 27 15:53:17 crc kubenswrapper[4767]: I0127 15:53:17.550614 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2" podStartSLOduration=27.550579175 podStartE2EDuration="27.550579175s" podCreationTimestamp="2026-01-27 15:52:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:53:17.547898037 +0000 UTC m=+219.936915580" watchObservedRunningTime="2026-01-27 15:53:17.550579175 +0000 UTC m=+219.939596698"
Jan 27 15:53:17 crc kubenswrapper[4767]: I0127 15:53:17.601958 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=21.601939169 podStartE2EDuration="21.601939169s" podCreationTimestamp="2026-01-27 15:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:53:17.597403186 +0000 UTC m=+219.986420719" watchObservedRunningTime="2026-01-27 15:53:17.601939169 +0000 UTC m=+219.990956692"
Jan 27 15:53:17 crc kubenswrapper[4767]: E0127 15:53:17.710794 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-lbhhq" podUID="5f897714-8bcf-4ec4-8be0-86dfb0fc4785"
Jan 27 15:53:17 crc kubenswrapper[4767]: E0127 15:53:17.710853 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-7pmbd" podUID="43f8f2c5-51fc-4707-903f-fef9c5f133c5"
Jan 27 15:53:18 crc kubenswrapper[4767]: I0127 15:53:18.332764 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34c3a00d-6b69-4790-ba95-29ae01dd296f" path="/var/lib/kubelet/pods/34c3a00d-6b69-4790-ba95-29ae01dd296f/volumes"
Jan 27 15:53:18 crc kubenswrapper[4767]: I0127 15:53:18.478341 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6v8jc" event={"ID":"b45a028d-9f8c-4090-985b-e7ddf929554c","Type":"ContainerStarted","Data":"e86bfa3c41b806871ee62fd82a757a0d49d31d4274d2ed80212e745e29107dcd"}
Jan 27 15:53:18 crc kubenswrapper[4767]: I0127 15:53:18.483621 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bnmj9" event={"ID":"69b7edc7-f8c2-4e0e-923c-b5a3395ae14d","Type":"ContainerStarted","Data":"999fa9e1ada0aaf3a0252676536b85f72bf27340fe5f2034e990393f4ed973d7"}
Jan 27 15:53:18 crc kubenswrapper[4767]: I0127 15:53:18.491519 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wm4cz" event={"ID":"eabb94a2-a935-40be-a094-1a71d904b222","Type":"ContainerStarted","Data":"644f7ad50402f23801add9915649ea46c01caf3eff6a60907907dce139f0c7db"}
Jan 27 15:53:18 crc kubenswrapper[4767]: I0127 15:53:18.815774 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 15:53:18 crc kubenswrapper[4767]: I0127 15:53:18.931904 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/533a1fac-6603-4f9c-9a50-1095e44d1216-kube-api-access\") pod \"533a1fac-6603-4f9c-9a50-1095e44d1216\" (UID: \"533a1fac-6603-4f9c-9a50-1095e44d1216\") "
Jan 27 15:53:18 crc kubenswrapper[4767]: I0127 15:53:18.932058 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/533a1fac-6603-4f9c-9a50-1095e44d1216-kubelet-dir\") pod \"533a1fac-6603-4f9c-9a50-1095e44d1216\" (UID: \"533a1fac-6603-4f9c-9a50-1095e44d1216\") "
Jan 27 15:53:18 crc kubenswrapper[4767]: I0127 15:53:18.932181 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/533a1fac-6603-4f9c-9a50-1095e44d1216-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "533a1fac-6603-4f9c-9a50-1095e44d1216" (UID: "533a1fac-6603-4f9c-9a50-1095e44d1216"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 15:53:18 crc kubenswrapper[4767]: I0127 15:53:18.932478 4767 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/533a1fac-6603-4f9c-9a50-1095e44d1216-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 27 15:53:18 crc kubenswrapper[4767]: I0127 15:53:18.939781 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/533a1fac-6603-4f9c-9a50-1095e44d1216-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "533a1fac-6603-4f9c-9a50-1095e44d1216" (UID: "533a1fac-6603-4f9c-9a50-1095e44d1216"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:53:19 crc kubenswrapper[4767]: I0127 15:53:19.034341 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/533a1fac-6603-4f9c-9a50-1095e44d1216-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 27 15:53:19 crc kubenswrapper[4767]: I0127 15:53:19.499465 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 27 15:53:19 crc kubenswrapper[4767]: I0127 15:53:19.499531 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"533a1fac-6603-4f9c-9a50-1095e44d1216","Type":"ContainerDied","Data":"9ece2e78b45af4b0cd9f5609fb9e13a7f4ffeb344c26b91d60fd0b82f314bd2a"}
Jan 27 15:53:19 crc kubenswrapper[4767]: I0127 15:53:19.499951 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ece2e78b45af4b0cd9f5609fb9e13a7f4ffeb344c26b91d60fd0b82f314bd2a"
Jan 27 15:53:19 crc kubenswrapper[4767]: I0127 15:53:19.502723 4767 generic.go:334] "Generic (PLEG): container finished" podID="b45a028d-9f8c-4090-985b-e7ddf929554c" containerID="e86bfa3c41b806871ee62fd82a757a0d49d31d4274d2ed80212e745e29107dcd" exitCode=0
Jan 27 15:53:19 crc kubenswrapper[4767]: I0127 15:53:19.502792 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6v8jc" event={"ID":"b45a028d-9f8c-4090-985b-e7ddf929554c","Type":"ContainerDied","Data":"e86bfa3c41b806871ee62fd82a757a0d49d31d4274d2ed80212e745e29107dcd"}
Jan 27 15:53:19 crc kubenswrapper[4767]: I0127 15:53:19.513832 4767 generic.go:334] "Generic (PLEG): container finished" podID="69b7edc7-f8c2-4e0e-923c-b5a3395ae14d" containerID="999fa9e1ada0aaf3a0252676536b85f72bf27340fe5f2034e990393f4ed973d7" exitCode=0
Jan 27 15:53:19 crc kubenswrapper[4767]: I0127 15:53:19.513910 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bnmj9" event={"ID":"69b7edc7-f8c2-4e0e-923c-b5a3395ae14d","Type":"ContainerDied","Data":"999fa9e1ada0aaf3a0252676536b85f72bf27340fe5f2034e990393f4ed973d7"}
Jan 27 15:53:19 crc kubenswrapper[4767]: I0127 15:53:19.518124 4767 generic.go:334] "Generic (PLEG): container finished" podID="eabb94a2-a935-40be-a094-1a71d904b222" containerID="644f7ad50402f23801add9915649ea46c01caf3eff6a60907907dce139f0c7db" exitCode=0
Jan 27 15:53:19 crc kubenswrapper[4767]: I0127 15:53:19.518214 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wm4cz" event={"ID":"eabb94a2-a935-40be-a094-1a71d904b222","Type":"ContainerDied","Data":"644f7ad50402f23801add9915649ea46c01caf3eff6a60907907dce139f0c7db"}
Jan 27 15:53:21 crc kubenswrapper[4767]: I0127 15:53:21.313146 4767 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksqxd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Jan 27 15:53:21 crc kubenswrapper[4767]: I0127 15:53:21.313165 4767 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksqxd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Jan 27 15:53:21 crc kubenswrapper[4767]: I0127 15:53:21.313214 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ksqxd" podUID="25e39933-042b-46a8-9e96-19acb0944e08" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Jan 27 15:53:21 crc kubenswrapper[4767]: I0127 15:53:21.313214 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ksqxd" podUID="25e39933-042b-46a8-9e96-19acb0944e08" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Jan 27 15:53:23 crc kubenswrapper[4767]: I0127 15:53:23.543051 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wm4cz" event={"ID":"eabb94a2-a935-40be-a094-1a71d904b222","Type":"ContainerStarted","Data":"51979f44873623c6fa42dca380545756f8b8a9f43d7cbaaadc361a3578c25f23"}
Jan 27 15:53:23 crc kubenswrapper[4767]: I0127 15:53:23.567952 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wm4cz" podStartSLOduration=3.696307268 podStartE2EDuration="1m12.567932226s" podCreationTimestamp="2026-01-27 15:52:11 +0000 UTC" firstStartedPulling="2026-01-27 15:52:13.944206546 +0000 UTC m=+156.333224069" lastFinishedPulling="2026-01-27 15:53:22.815831504 +0000 UTC m=+225.204849027" observedRunningTime="2026-01-27 15:53:23.564006751 +0000 UTC m=+225.953024274" watchObservedRunningTime="2026-01-27 15:53:23.567932226 +0000 UTC m=+225.956949749"
Jan 27 15:53:24 crc kubenswrapper[4767]: I0127 15:53:24.551956 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6v8jc" event={"ID":"b45a028d-9f8c-4090-985b-e7ddf929554c","Type":"ContainerStarted","Data":"04b0182824f7aa7a90b3bdecdb45db36eb38df9c63eeb5e24f6c5bcbf7506baa"}
Jan 27 15:53:24 crc kubenswrapper[4767]: I0127 15:53:24.577822 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6v8jc" podStartSLOduration=2.8849194799999998 podStartE2EDuration="1m13.577792907s" podCreationTimestamp="2026-01-27 15:52:11 +0000 UTC" firstStartedPulling="2026-01-27 15:52:12.902649407 +0000 UTC m=+155.291666940" lastFinishedPulling="2026-01-27 15:53:23.595522844 +0000 UTC m=+225.984540367" observedRunningTime="2026-01-27 15:53:24.574971685 +0000 UTC m=+226.963989218" watchObservedRunningTime="2026-01-27 15:53:24.577792907 +0000 UTC m=+226.966810430"
Jan 27 15:53:24 crc kubenswrapper[4767]: I0127 15:53:24.858163 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 15:53:24 crc kubenswrapper[4767]: I0127 15:53:24.858899 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 15:53:24 crc kubenswrapper[4767]: I0127 15:53:24.858986 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx"
Jan 27 15:53:24 crc kubenswrapper[4767]: I0127 15:53:24.861474 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"21d40a73dfb9ca034fd8cf554f1d216546f248cfb6a917464c43bc4f15d0546a"} pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 15:53:24 crc kubenswrapper[4767]: I0127 15:53:24.861571 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" containerID="cri-o://21d40a73dfb9ca034fd8cf554f1d216546f248cfb6a917464c43bc4f15d0546a" gracePeriod=600
Jan 27 15:53:26 crc kubenswrapper[4767]: I0127 15:53:26.571321 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bnmj9" event={"ID":"69b7edc7-f8c2-4e0e-923c-b5a3395ae14d","Type":"ContainerStarted","Data":"cbcadfce1206cdd1288ed5c5b76cd13a8a2a7a468a1f4252d0c7e5aa74828f0b"}
Jan 27 15:53:26 crc kubenswrapper[4767]: I0127 15:53:26.573417 4767 generic.go:334] "Generic (PLEG): container finished" podID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerID="21d40a73dfb9ca034fd8cf554f1d216546f248cfb6a917464c43bc4f15d0546a" exitCode=0
Jan 27 15:53:26 crc kubenswrapper[4767]: I0127 15:53:26.573453 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerDied","Data":"21d40a73dfb9ca034fd8cf554f1d216546f248cfb6a917464c43bc4f15d0546a"}
Jan 27 15:53:27 crc kubenswrapper[4767]: I0127 15:53:27.600039 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bnmj9" podStartSLOduration=4.658507608 podStartE2EDuration="1m13.600023645s" podCreationTimestamp="2026-01-27 15:52:14 +0000 UTC" firstStartedPulling="2026-01-27 15:52:15.996257831 +0000 UTC m=+158.385275354" lastFinishedPulling="2026-01-27 15:53:24.937773868 +0000 UTC m=+227.326791391" observedRunningTime="2026-01-27 15:53:27.597333436 +0000 UTC m=+229.986350959" watchObservedRunningTime="2026-01-27 15:53:27.600023645 +0000 UTC m=+229.989041168"
Jan 27 15:53:28 crc kubenswrapper[4767]: I0127 15:53:28.601471 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerStarted","Data":"e7ed48adaa0e9bc3ad71d07ed5596b4b1fc231c226ada212f6d4dce03922dd53"}
Jan 27 15:53:31 crc kubenswrapper[4767]: I0127 15:53:31.333392 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-ksqxd"
Jan 27 15:53:31 crc kubenswrapper[4767]: I0127 15:53:31.445862 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6v8jc"
Jan 27 15:53:31 crc kubenswrapper[4767]: I0127 15:53:31.445920 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6v8jc"
Jan 27 15:53:31 crc kubenswrapper[4767]: I0127 15:53:31.618353 4767 generic.go:334] "Generic (PLEG): container finished" podID="84510a56-8f29-404f-b5eb-c7433db1de6b" containerID="865b8536ec8ecad7ab4cf2e26539c67aa8b53356a9b821c88d34700b32fdc8a1" exitCode=0
Jan 27 15:53:31 crc kubenswrapper[4767]: I0127 15:53:31.618428 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nshp" event={"ID":"84510a56-8f29-404f-b5eb-c7433db1de6b","Type":"ContainerDied","Data":"865b8536ec8ecad7ab4cf2e26539c67aa8b53356a9b821c88d34700b32fdc8a1"}
Jan 27 15:53:31 crc kubenswrapper[4767]: I0127 15:53:31.861245 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wm4cz"
Jan 27 15:53:31 crc kubenswrapper[4767]: I0127 15:53:31.861549 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wm4cz" Jan 27 15:53:32 crc kubenswrapper[4767]: I0127 15:53:32.883802 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6v8jc" Jan 27 15:53:32 crc kubenswrapper[4767]: I0127 15:53:32.884794 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wm4cz" Jan 27 15:53:32 crc kubenswrapper[4767]: I0127 15:53:32.950850 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6v8jc" Jan 27 15:53:33 crc kubenswrapper[4767]: I0127 15:53:33.681697 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wm4cz" Jan 27 15:53:34 crc kubenswrapper[4767]: I0127 15:53:34.595316 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bnmj9" Jan 27 15:53:34 crc kubenswrapper[4767]: I0127 15:53:34.595680 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bnmj9" Jan 27 15:53:34 crc kubenswrapper[4767]: I0127 15:53:34.658754 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bnmj9" Jan 27 15:53:34 crc kubenswrapper[4767]: I0127 15:53:34.724301 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bnmj9" Jan 27 15:53:34 crc kubenswrapper[4767]: I0127 15:53:34.856778 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wm4cz"] Jan 27 15:53:35 crc kubenswrapper[4767]: I0127 15:53:35.646227 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wm4cz" podUID="eabb94a2-a935-40be-a094-1a71d904b222" containerName="registry-server" containerID="cri-o://51979f44873623c6fa42dca380545756f8b8a9f43d7cbaaadc361a3578c25f23" gracePeriod=2 Jan 27 15:53:37 crc kubenswrapper[4767]: I0127 15:53:37.656337 4767 generic.go:334] "Generic (PLEG): container finished" podID="eabb94a2-a935-40be-a094-1a71d904b222" containerID="51979f44873623c6fa42dca380545756f8b8a9f43d7cbaaadc361a3578c25f23" exitCode=0 Jan 27 15:53:37 crc kubenswrapper[4767]: I0127 15:53:37.656405 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wm4cz" event={"ID":"eabb94a2-a935-40be-a094-1a71d904b222","Type":"ContainerDied","Data":"51979f44873623c6fa42dca380545756f8b8a9f43d7cbaaadc361a3578c25f23"} Jan 27 15:53:41 crc kubenswrapper[4767]: E0127 15:53:41.861903 4767 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 51979f44873623c6fa42dca380545756f8b8a9f43d7cbaaadc361a3578c25f23 is running failed: container process not found" containerID="51979f44873623c6fa42dca380545756f8b8a9f43d7cbaaadc361a3578c25f23" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 15:53:41 crc kubenswrapper[4767]: E0127 15:53:41.862701 4767 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
51979f44873623c6fa42dca380545756f8b8a9f43d7cbaaadc361a3578c25f23 is running failed: container process not found" containerID="51979f44873623c6fa42dca380545756f8b8a9f43d7cbaaadc361a3578c25f23" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 15:53:41 crc kubenswrapper[4767]: E0127 15:53:41.863003 4767 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 51979f44873623c6fa42dca380545756f8b8a9f43d7cbaaadc361a3578c25f23 is running failed: container process not found" containerID="51979f44873623c6fa42dca380545756f8b8a9f43d7cbaaadc361a3578c25f23" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 15:53:41 crc kubenswrapper[4767]: E0127 15:53:41.863077 4767 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 51979f44873623c6fa42dca380545756f8b8a9f43d7cbaaadc361a3578c25f23 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-wm4cz" podUID="eabb94a2-a935-40be-a094-1a71d904b222" containerName="registry-server" Jan 27 15:53:42 crc kubenswrapper[4767]: I0127 15:53:42.201059 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wm4cz" Jan 27 15:53:42 crc kubenswrapper[4767]: I0127 15:53:42.359990 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eabb94a2-a935-40be-a094-1a71d904b222-catalog-content\") pod \"eabb94a2-a935-40be-a094-1a71d904b222\" (UID: \"eabb94a2-a935-40be-a094-1a71d904b222\") " Jan 27 15:53:42 crc kubenswrapper[4767]: I0127 15:53:42.360420 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtcw7\" (UniqueName: \"kubernetes.io/projected/eabb94a2-a935-40be-a094-1a71d904b222-kube-api-access-wtcw7\") pod \"eabb94a2-a935-40be-a094-1a71d904b222\" (UID: \"eabb94a2-a935-40be-a094-1a71d904b222\") " Jan 27 15:53:42 crc kubenswrapper[4767]: I0127 15:53:42.360451 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eabb94a2-a935-40be-a094-1a71d904b222-utilities\") pod \"eabb94a2-a935-40be-a094-1a71d904b222\" (UID: \"eabb94a2-a935-40be-a094-1a71d904b222\") " Jan 27 15:53:42 crc kubenswrapper[4767]: I0127 15:53:42.362055 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eabb94a2-a935-40be-a094-1a71d904b222-utilities" (OuterVolumeSpecName: "utilities") pod "eabb94a2-a935-40be-a094-1a71d904b222" (UID: "eabb94a2-a935-40be-a094-1a71d904b222"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:53:42 crc kubenswrapper[4767]: I0127 15:53:42.367244 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eabb94a2-a935-40be-a094-1a71d904b222-kube-api-access-wtcw7" (OuterVolumeSpecName: "kube-api-access-wtcw7") pod "eabb94a2-a935-40be-a094-1a71d904b222" (UID: "eabb94a2-a935-40be-a094-1a71d904b222"). InnerVolumeSpecName "kube-api-access-wtcw7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:53:42 crc kubenswrapper[4767]: I0127 15:53:42.413725 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eabb94a2-a935-40be-a094-1a71d904b222-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eabb94a2-a935-40be-a094-1a71d904b222" (UID: "eabb94a2-a935-40be-a094-1a71d904b222"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:53:42 crc kubenswrapper[4767]: I0127 15:53:42.462096 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eabb94a2-a935-40be-a094-1a71d904b222-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:53:42 crc kubenswrapper[4767]: I0127 15:53:42.462143 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtcw7\" (UniqueName: \"kubernetes.io/projected/eabb94a2-a935-40be-a094-1a71d904b222-kube-api-access-wtcw7\") on node \"crc\" DevicePath \"\"" Jan 27 15:53:42 crc kubenswrapper[4767]: I0127 15:53:42.462160 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eabb94a2-a935-40be-a094-1a71d904b222-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:53:42 crc kubenswrapper[4767]: I0127 15:53:42.681238 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wm4cz" event={"ID":"eabb94a2-a935-40be-a094-1a71d904b222","Type":"ContainerDied","Data":"3bc3250ad0e1f805e03d662a12a603e79165b9180662501c189df47212d1d88d"} Jan 27 15:53:42 crc kubenswrapper[4767]: I0127 15:53:42.681269 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wm4cz" Jan 27 15:53:42 crc kubenswrapper[4767]: I0127 15:53:42.681299 4767 scope.go:117] "RemoveContainer" containerID="51979f44873623c6fa42dca380545756f8b8a9f43d7cbaaadc361a3578c25f23" Jan 27 15:53:42 crc kubenswrapper[4767]: I0127 15:53:42.689335 4767 generic.go:334] "Generic (PLEG): container finished" podID="0e3e0a9a-9b2b-4cf4-9f92-847e870be858" containerID="5391657d0de4f601a87219e013043c87121d17a7045d84052ab659d724d957ae" exitCode=0 Jan 27 15:53:42 crc kubenswrapper[4767]: I0127 15:53:42.689436 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7zcn" event={"ID":"0e3e0a9a-9b2b-4cf4-9f92-847e870be858","Type":"ContainerDied","Data":"5391657d0de4f601a87219e013043c87121d17a7045d84052ab659d724d957ae"} Jan 27 15:53:42 crc kubenswrapper[4767]: I0127 15:53:42.696872 4767 generic.go:334] "Generic (PLEG): container finished" podID="43f8f2c5-51fc-4707-903f-fef9c5f133c5" containerID="7e54640c00bad73cb848619e349470f79cadd1083b48e25bd0634c11126e4d50" exitCode=0 Jan 27 15:53:42 crc kubenswrapper[4767]: I0127 15:53:42.696928 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7pmbd" event={"ID":"43f8f2c5-51fc-4707-903f-fef9c5f133c5","Type":"ContainerDied","Data":"7e54640c00bad73cb848619e349470f79cadd1083b48e25bd0634c11126e4d50"} Jan 27 15:53:42 crc kubenswrapper[4767]: I0127 15:53:42.699922 4767 generic.go:334] "Generic (PLEG): container finished" podID="53c82776-5f8d-496e-a045-428e96b9f87c" containerID="d202bf13c77465ad119c6b43d1c396f1e04a804f985c1d5dc346a84d07e80066" exitCode=0 Jan 27 15:53:42 crc kubenswrapper[4767]: I0127 15:53:42.699987 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-6pz42" event={"ID":"53c82776-5f8d-496e-a045-428e96b9f87c","Type":"ContainerDied","Data":"d202bf13c77465ad119c6b43d1c396f1e04a804f985c1d5dc346a84d07e80066"} Jan 27 15:53:42 crc kubenswrapper[4767]: I0127 15:53:42.702664 4767 generic.go:334] "Generic (PLEG): container finished" podID="5f897714-8bcf-4ec4-8be0-86dfb0fc4785" containerID="9db9a243c6eccc7e88138f8f0ea2201fc59d7f2b715467dbf2238fd32db0bfe2" exitCode=0 Jan 27 15:53:42 crc kubenswrapper[4767]: I0127 15:53:42.702718 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lbhhq" event={"ID":"5f897714-8bcf-4ec4-8be0-86dfb0fc4785","Type":"ContainerDied","Data":"9db9a243c6eccc7e88138f8f0ea2201fc59d7f2b715467dbf2238fd32db0bfe2"} Jan 27 15:53:42 crc kubenswrapper[4767]: I0127 15:53:42.732826 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nshp" event={"ID":"84510a56-8f29-404f-b5eb-c7433db1de6b","Type":"ContainerStarted","Data":"190eb07d0f90303deb2aff0c966d689c5691643c64e91063107588037f02002b"} Jan 27 15:53:42 crc kubenswrapper[4767]: I0127 15:53:42.779834 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7nshp" podStartSLOduration=2.605360008 podStartE2EDuration="1m28.779803884s" podCreationTimestamp="2026-01-27 15:52:14 +0000 UTC" firstStartedPulling="2026-01-27 15:52:15.992675337 +0000 UTC m=+158.381692860" lastFinishedPulling="2026-01-27 15:53:42.167119193 +0000 UTC m=+244.556136736" observedRunningTime="2026-01-27 15:53:42.77866702 +0000 UTC m=+245.167684553" watchObservedRunningTime="2026-01-27 15:53:42.779803884 +0000 UTC m=+245.168821407" Jan 27 15:53:42 crc kubenswrapper[4767]: I0127 15:53:42.793911 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wm4cz"] Jan 27 15:53:42 crc kubenswrapper[4767]: I0127 15:53:42.797837 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wm4cz"] Jan 27 15:53:42 crc kubenswrapper[4767]: I0127 15:53:42.802023 4767 scope.go:117] "RemoveContainer" containerID="644f7ad50402f23801add9915649ea46c01caf3eff6a60907907dce139f0c7db" Jan 27 15:53:42 crc kubenswrapper[4767]: I0127 15:53:42.829084 4767 scope.go:117] "RemoveContainer" containerID="0d9d05b61dd42b5b0a979f260efc5b9b7728ebf6e39ad4726422953386c24d6e" Jan 27 15:53:43 crc kubenswrapper[4767]: I0127 15:53:43.740741 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lbhhq" event={"ID":"5f897714-8bcf-4ec4-8be0-86dfb0fc4785","Type":"ContainerStarted","Data":"ca881660aed89370d466371a63e4a82c4e497d29a17ba25a663198b8ebd6e93f"} Jan 27 15:53:43 crc kubenswrapper[4767]: I0127 15:53:43.745174 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7zcn" event={"ID":"0e3e0a9a-9b2b-4cf4-9f92-847e870be858","Type":"ContainerStarted","Data":"4ad907d3e3d0894c4c199649e95549e18caee26bff8879e3696214372d095927"} Jan 27 15:53:43 crc kubenswrapper[4767]: I0127 15:53:43.748159 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7pmbd" event={"ID":"43f8f2c5-51fc-4707-903f-fef9c5f133c5","Type":"ContainerStarted","Data":"bc0007bbc7a28a938a90d8cf529c3fe7ae33e83674e397c61aba655b747e2245"} Jan 27 15:53:43 crc kubenswrapper[4767]: I0127 15:53:43.750358 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-6pz42" event={"ID":"53c82776-5f8d-496e-a045-428e96b9f87c","Type":"ContainerStarted","Data":"59d0912f63ec16d345984a14f6cb7c79b01bb83df7442074548e3270e315a67f"} Jan 27 15:53:43 crc kubenswrapper[4767]: I0127 15:53:43.768278 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lbhhq" podStartSLOduration=3.451505973 podStartE2EDuration="1m32.768259448s" podCreationTimestamp="2026-01-27 15:52:11 +0000 UTC" firstStartedPulling="2026-01-27 15:52:13.93642245 +0000 UTC m=+156.325439973" lastFinishedPulling="2026-01-27 15:53:43.253175915 +0000 UTC m=+245.642193448" observedRunningTime="2026-01-27 15:53:43.767344851 +0000 UTC m=+246.156362394" watchObservedRunningTime="2026-01-27 15:53:43.768259448 +0000 UTC m=+246.157276971" Jan 27 15:53:43 crc kubenswrapper[4767]: I0127 15:53:43.785725 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7pmbd" podStartSLOduration=3.565801157 podStartE2EDuration="1m30.785704469s" podCreationTimestamp="2026-01-27 15:52:13 +0000 UTC" firstStartedPulling="2026-01-27 15:52:16.000945457 +0000 UTC m=+158.389963000" lastFinishedPulling="2026-01-27 15:53:43.220848789 +0000 UTC m=+245.609866312" observedRunningTime="2026-01-27 15:53:43.78301076 +0000 UTC m=+246.172028293" watchObservedRunningTime="2026-01-27 15:53:43.785704469 +0000 UTC m=+246.174721992" Jan 27 15:53:43 crc kubenswrapper[4767]: I0127 15:53:43.802705 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r7zcn" podStartSLOduration=4.645288473 podStartE2EDuration="1m32.802690746s" podCreationTimestamp="2026-01-27 15:52:11 +0000 UTC" firstStartedPulling="2026-01-27 15:52:14.991654444 +0000 UTC m=+157.380671967" lastFinishedPulling="2026-01-27 15:53:43.149056717 +0000 UTC m=+245.538074240" observedRunningTime="2026-01-27 15:53:43.801429689 +0000 UTC m=+246.190447212" watchObservedRunningTime="2026-01-27 15:53:43.802690746 +0000 UTC m=+246.191708269" Jan 27 15:53:43 crc kubenswrapper[4767]: I0127 15:53:43.829195 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6pz42" podStartSLOduration=3.7283032560000002 podStartE2EDuration="1m30.829176512s" podCreationTimestamp="2026-01-27 15:52:13 +0000 UTC" firstStartedPulling="2026-01-27 15:52:16.001459822 +0000 UTC m=+158.390477345" lastFinishedPulling="2026-01-27 15:53:43.102333078 +0000 UTC m=+245.491350601" observedRunningTime="2026-01-27 15:53:43.826577576 +0000 UTC m=+246.215595099" watchObservedRunningTime="2026-01-27 15:53:43.829176512 +0000 UTC m=+246.218194035" Jan 27 15:53:44 crc kubenswrapper[4767]: I0127 15:53:44.025464 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7pmbd" Jan 27 15:53:44 crc kubenswrapper[4767]: I0127 15:53:44.025526 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7pmbd" Jan 27 15:53:44 crc kubenswrapper[4767]: I0127 15:53:44.332275 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eabb94a2-a935-40be-a094-1a71d904b222" path="/var/lib/kubelet/pods/eabb94a2-a935-40be-a094-1a71d904b222/volumes" Jan 27 15:53:44 crc kubenswrapper[4767]: I0127 15:53:44.969240 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-7nshp" Jan 27 15:53:44 crc kubenswrapper[4767]: I0127 15:53:44.969670 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7nshp" Jan 27 15:53:45 crc kubenswrapper[4767]: I0127 15:53:45.060734 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-7pmbd" podUID="43f8f2c5-51fc-4707-903f-fef9c5f133c5" containerName="registry-server" probeResult="failure" output=< Jan 27 15:53:45 crc kubenswrapper[4767]: timeout: failed to connect service ":50051" within 1s Jan 27 15:53:45 crc kubenswrapper[4767]: > Jan 27 15:53:46 crc kubenswrapper[4767]: I0127 15:53:46.014857 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7nshp" podUID="84510a56-8f29-404f-b5eb-c7433db1de6b" containerName="registry-server" probeResult="failure" output=< Jan 27 15:53:46 crc kubenswrapper[4767]: timeout: failed to connect service ":50051" within 1s Jan 27 15:53:46 crc kubenswrapper[4767]: > Jan 27 15:53:51 crc kubenswrapper[4767]: I0127 15:53:51.024418 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tqzlw"] Jan 27 15:53:51 crc kubenswrapper[4767]: I0127 15:53:51.606481 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lbhhq" Jan 27 15:53:51 crc kubenswrapper[4767]: I0127 15:53:51.606539 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lbhhq" Jan 27 15:53:51 crc kubenswrapper[4767]: I0127 15:53:51.662867 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lbhhq" Jan 27 15:53:51 crc kubenswrapper[4767]: I0127 15:53:51.836635 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lbhhq" Jan 27 15:53:52 crc kubenswrapper[4767]: I0127 15:53:52.048665 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r7zcn" Jan 27 15:53:52 crc kubenswrapper[4767]: I0127 15:53:52.048714 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r7zcn" Jan 27 15:53:52 crc kubenswrapper[4767]: I0127 15:53:52.085552 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r7zcn" Jan 27 15:53:52 crc kubenswrapper[4767]: I0127 15:53:52.834332 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r7zcn" Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.582320 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6pz42" Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.582390 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6pz42" Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.620252 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6pz42" Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.851543 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6pz42" Jan 27 15:53:53 crc 
kubenswrapper[4767]: I0127 15:53:53.891976 4767 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 15:53:53 crc kubenswrapper[4767]: E0127 15:53:53.892527 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eabb94a2-a935-40be-a094-1a71d904b222" containerName="registry-server" Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.892549 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="eabb94a2-a935-40be-a094-1a71d904b222" containerName="registry-server" Jan 27 15:53:53 crc kubenswrapper[4767]: E0127 15:53:53.892573 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eabb94a2-a935-40be-a094-1a71d904b222" containerName="extract-content" Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.892581 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="eabb94a2-a935-40be-a094-1a71d904b222" containerName="extract-content" Jan 27 15:53:53 crc kubenswrapper[4767]: E0127 15:53:53.892591 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eabb94a2-a935-40be-a094-1a71d904b222" containerName="extract-utilities" Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.892598 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="eabb94a2-a935-40be-a094-1a71d904b222" containerName="extract-utilities" Jan 27 15:53:53 crc kubenswrapper[4767]: E0127 15:53:53.892609 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="533a1fac-6603-4f9c-9a50-1095e44d1216" containerName="pruner" Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.892615 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="533a1fac-6603-4f9c-9a50-1095e44d1216" containerName="pruner" Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.892711 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="533a1fac-6603-4f9c-9a50-1095e44d1216" containerName="pruner" Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.892725 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="eabb94a2-a935-40be-a094-1a71d904b222" containerName="registry-server" Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.893193 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.896101 4767 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.896525 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd" gracePeriod=15 Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.896608 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243" gracePeriod=15 Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.896652 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9" gracePeriod=15 Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.896701 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467" gracePeriod=15 Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.896609 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f" gracePeriod=15 Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.897825 4767 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 15:53:53 crc kubenswrapper[4767]: E0127 15:53:53.898114 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.898136 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 15:53:53 crc kubenswrapper[4767]: E0127 15:53:53.898149 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.898157 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 15:53:53 crc kubenswrapper[4767]: E0127 15:53:53.898174 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.898183 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-syncer" Jan 27 15:53:53 crc kubenswrapper[4767]: E0127 15:53:53.898193 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.898203 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 15:53:53 crc kubenswrapper[4767]: E0127 15:53:53.898227 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.898234 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 15:53:53 crc kubenswrapper[4767]: E0127 15:53:53.898246 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.898253 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 15:53:53 crc kubenswrapper[4767]: E0127 15:53:53.898268 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.898277 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.898478 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.898493 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.898507 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.898517 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.898548 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.898625 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 15:53:53 crc kubenswrapper[4767]: I0127 15:53:53.947381 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.000164 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:53:54 crc 
kubenswrapper[4767]: I0127 15:53:54.000233 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.000263 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.000348 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.000381 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.000401 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.000418 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.000495 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.065148 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7pmbd" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.066633 4767 status_manager.go:851] "Failed to get status for pod" podUID="43f8f2c5-51fc-4707-903f-fef9c5f133c5" pod="openshift-marketplace/redhat-marketplace-7pmbd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pmbd\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.066897 4767 status_manager.go:851] "Failed to get status for pod" 
podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.067076 4767 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.102307 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.102365 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.102393 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.102420 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.102447 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.102473 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.102496 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.102541 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.102608 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.102644 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.102665 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.102685 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.102711 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.102734 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.102757 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.102778 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.102923 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7pmbd" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.103728 4767 
status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.103899 4767 status_manager.go:851] "Failed to get status for pod" podUID="43f8f2c5-51fc-4707-903f-fef9c5f133c5" pod="openshift-marketplace/redhat-marketplace-7pmbd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pmbd\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.104058 4767 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.232295 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:53:54 crc kubenswrapper[4767]: W0127 15:53:54.253427 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-42f265446909edb5079b2503f73c49186aa4e1339e3f05c7820f7875d5f47061 WatchSource:0}: Error finding container 42f265446909edb5079b2503f73c49186aa4e1339e3f05c7820f7875d5f47061: Status 404 returned error can't find the container with id 42f265446909edb5079b2503f73c49186aa4e1339e3f05c7820f7875d5f47061 Jan 27 15:53:54 crc kubenswrapper[4767]: E0127 15:53:54.256222 4767 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.132:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188ea176937af78b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 15:53:54.255705995 +0000 UTC m=+256.644723518,LastTimestamp:2026-01-27 15:53:54.255705995 +0000 UTC m=+256.644723518,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 15:53:54 crc kubenswrapper[4767]: E0127 15:53:54.308612 4767 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.132:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188ea176937af78b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 15:53:54.255705995 +0000 UTC m=+256.644723518,LastTimestamp:2026-01-27 15:53:54.255705995 +0000 UTC m=+256.644723518,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.811670 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e2f0186294f7f6eb3a8e167d43229abf5b6aa495ec1da88904b1cca4cdb832c9"} Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.812027 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"42f265446909edb5079b2503f73c49186aa4e1339e3f05c7820f7875d5f47061"} Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.812409 4767 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.812820 4767 status_manager.go:851] "Failed to get status for pod" podUID="43f8f2c5-51fc-4707-903f-fef9c5f133c5" pod="openshift-marketplace/redhat-marketplace-7pmbd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pmbd\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.814701 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.816097 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.816741 4767 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f" exitCode=0 Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.816831 4767 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467" exitCode=0 Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.816905 4767 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9" exitCode=0
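
The two event.go:368 entries above carry the same Event (identical name suffix .188ea176937af78b and identical FirstTimestamp), posted again about 50ms later and, further down in this excerpt, once more at 15:54:04: the kubelet keeps the event and retries after sleeping rather than dropping it while the apiserver refuses connections. An illustrative sketch of that retry-after-sleeping shape; the endpoint, payload, and retry parameters here are stand-ins, not the kubelet's actual recorder:

```go
// eventretry.go - post an event, sleeping and retrying on connection errors.
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

func postEventWithRetry(url string, body []byte, attempts int, sleep time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := http.Post(url, "application/json", bytes.NewReader(body))
		if err == nil {
			resp.Body.Close()
			return nil // event accepted
		}
		lastErr = err // e.g. "connect: connection refused" while the apiserver is down
		time.Sleep(sleep)
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
}

func main() {
	// URL mirrors the one in the log; payload and counts are assumptions.
	err := postEventWithRetry(
		"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events",
		[]byte(`{}`), 3, 10*time.Second)
	fmt.Println(err)
}
```
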
generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243" exitCode=2 Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.816837 4767 scope.go:117] "RemoveContainer" containerID="a1282299804620f7b88d9ee189c1a2a9dfea30fb6a0b861d811c21813a9aecde" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.819399 4767 generic.go:334] "Generic (PLEG): container finished" podID="696728d7-87d8-4e30-a896-472a5b86d1ca" containerID="e99af25e14750be6befe3efd59265293cdf70317fe2d422be204edb374b1a229" exitCode=0 Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.819489 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"696728d7-87d8-4e30-a896-472a5b86d1ca","Type":"ContainerDied","Data":"e99af25e14750be6befe3efd59265293cdf70317fe2d422be204edb374b1a229"} Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.820478 4767 status_manager.go:851] "Failed to get status for pod" podUID="43f8f2c5-51fc-4707-903f-fef9c5f133c5" pod="openshift-marketplace/redhat-marketplace-7pmbd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pmbd\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.820912 4767 status_manager.go:851] "Failed to get status for pod" podUID="696728d7-87d8-4e30-a896-472a5b86d1ca" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:54 crc kubenswrapper[4767]: I0127 15:53:54.821174 4767 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:55 crc kubenswrapper[4767]: I0127 15:53:55.003901 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7nshp" Jan 27 15:53:55 crc kubenswrapper[4767]: I0127 15:53:55.004618 4767 status_manager.go:851] "Failed to get status for pod" podUID="696728d7-87d8-4e30-a896-472a5b86d1ca" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:55 crc kubenswrapper[4767]: I0127 15:53:55.004863 4767 status_manager.go:851] "Failed to get status for pod" podUID="84510a56-8f29-404f-b5eb-c7433db1de6b" pod="openshift-marketplace/redhat-operators-7nshp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7nshp\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:55 crc kubenswrapper[4767]: I0127 15:53:55.005053 4767 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:55 crc kubenswrapper[4767]: 
I0127 15:53:55.005300 4767 status_manager.go:851] "Failed to get status for pod" podUID="43f8f2c5-51fc-4707-903f-fef9c5f133c5" pod="openshift-marketplace/redhat-marketplace-7pmbd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pmbd\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:55 crc kubenswrapper[4767]: I0127 15:53:55.041280 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7nshp" Jan 27 15:53:55 crc kubenswrapper[4767]: I0127 15:53:55.041861 4767 status_manager.go:851] "Failed to get status for pod" podUID="43f8f2c5-51fc-4707-903f-fef9c5f133c5" pod="openshift-marketplace/redhat-marketplace-7pmbd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pmbd\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:55 crc kubenswrapper[4767]: I0127 15:53:55.042447 4767 status_manager.go:851] "Failed to get status for pod" podUID="696728d7-87d8-4e30-a896-472a5b86d1ca" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:55 crc kubenswrapper[4767]: I0127 15:53:55.042976 4767 status_manager.go:851] "Failed to get status for pod" podUID="84510a56-8f29-404f-b5eb-c7433db1de6b" pod="openshift-marketplace/redhat-operators-7nshp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7nshp\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:55 crc kubenswrapper[4767]: I0127 15:53:55.043339 4767 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:55 crc kubenswrapper[4767]: I0127 15:53:55.826942 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
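
The status_manager.go:851 entries repeat in bursts: on every sync the status manager retries a GET for each pod it owes a status update, and every pod on the node fails the same way while 38.102.83.132:6443 refuses connections. A small sketch (hypothetical tool, regex assumed from the formatting in this excerpt) that counts the failures per pod, which makes the blast radius of the outage obvious at a glance:

```go
// statusburst.go - count "Failed to get status for pod" entries per pod.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var failed = regexp.MustCompile(`"Failed to get status for pod" podUID="([^"]+)" pod="([^"]+)"`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		for _, g := range failed.FindAllStringSubmatch(sc.Text(), -1) {
			counts[g[2]]++ // key by namespace/name; g[1] is the pod UID
		}
	}
	for pod, n := range counts {
		fmt.Printf("%4d  %s\n", n, pod)
	}
}
```
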
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.125610 4767 status_manager.go:851] "Failed to get status for pod" podUID="696728d7-87d8-4e30-a896-472a5b86d1ca" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.126053 4767 status_manager.go:851] "Failed to get status for pod" podUID="84510a56-8f29-404f-b5eb-c7433db1de6b" pod="openshift-marketplace/redhat-operators-7nshp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7nshp\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.126260 4767 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.126434 4767 status_manager.go:851] "Failed to get status for pod" podUID="43f8f2c5-51fc-4707-903f-fef9c5f133c5" pod="openshift-marketplace/redhat-marketplace-7pmbd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pmbd\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.228455 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/696728d7-87d8-4e30-a896-472a5b86d1ca-kubelet-dir\") pod \"696728d7-87d8-4e30-a896-472a5b86d1ca\" (UID: \"696728d7-87d8-4e30-a896-472a5b86d1ca\") " Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.228552 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/696728d7-87d8-4e30-a896-472a5b86d1ca-kube-api-access\") pod \"696728d7-87d8-4e30-a896-472a5b86d1ca\" (UID: \"696728d7-87d8-4e30-a896-472a5b86d1ca\") " Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.228582 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/696728d7-87d8-4e30-a896-472a5b86d1ca-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "696728d7-87d8-4e30-a896-472a5b86d1ca" (UID: "696728d7-87d8-4e30-a896-472a5b86d1ca"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.228602 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/696728d7-87d8-4e30-a896-472a5b86d1ca-var-lock\") pod \"696728d7-87d8-4e30-a896-472a5b86d1ca\" (UID: \"696728d7-87d8-4e30-a896-472a5b86d1ca\") " Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.228636 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/696728d7-87d8-4e30-a896-472a5b86d1ca-var-lock" (OuterVolumeSpecName: "var-lock") pod "696728d7-87d8-4e30-a896-472a5b86d1ca" (UID: "696728d7-87d8-4e30-a896-472a5b86d1ca"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.229099 4767 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/696728d7-87d8-4e30-a896-472a5b86d1ca-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.229112 4767 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/696728d7-87d8-4e30-a896-472a5b86d1ca-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.233511 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/696728d7-87d8-4e30-a896-472a5b86d1ca-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "696728d7-87d8-4e30-a896-472a5b86d1ca" (UID: "696728d7-87d8-4e30-a896-472a5b86d1ca"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.329715 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/696728d7-87d8-4e30-a896-472a5b86d1ca-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.763061 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.763747 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.764316 4767 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.764562 4767 status_manager.go:851] "Failed to get status for pod" podUID="696728d7-87d8-4e30-a896-472a5b86d1ca" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.764797 4767 status_manager.go:851] "Failed to get status for pod" podUID="84510a56-8f29-404f-b5eb-c7433db1de6b" pod="openshift-marketplace/redhat-operators-7nshp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7nshp\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.765036 4767 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.765460 4767 status_manager.go:851] "Failed to get status for pod" podUID="43f8f2c5-51fc-4707-903f-fef9c5f133c5" pod="openshift-marketplace/redhat-marketplace-7pmbd" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pmbd\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.835580 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.836407 4767 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd" exitCode=0 Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.836521 4767 scope.go:117] "RemoveContainer" containerID="f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.836749 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.838598 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"696728d7-87d8-4e30-a896-472a5b86d1ca","Type":"ContainerDied","Data":"946c2b3a62299b904d22a6dde2c480129e59f072eaed3f58abd8e8b0d81a551a"} Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.838630 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="946c2b3a62299b904d22a6dde2c480129e59f072eaed3f58abd8e8b0d81a551a" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.838644 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.844177 4767 status_manager.go:851] "Failed to get status for pod" podUID="84510a56-8f29-404f-b5eb-c7433db1de6b" pod="openshift-marketplace/redhat-operators-7nshp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7nshp\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.844416 4767 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.844620 4767 status_manager.go:851] "Failed to get status for pod" podUID="43f8f2c5-51fc-4707-903f-fef9c5f133c5" pod="openshift-marketplace/redhat-marketplace-7pmbd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pmbd\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.844934 4767 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.845140 4767 status_manager.go:851] "Failed to get status for pod" podUID="696728d7-87d8-4e30-a896-472a5b86d1ca" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.851325 4767 scope.go:117] "RemoveContainer" containerID="33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.865968 4767 scope.go:117] "RemoveContainer" containerID="0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.878769 4767 scope.go:117] "RemoveContainer" containerID="f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.899138 4767 scope.go:117] "RemoveContainer" containerID="5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.917237 4767 scope.go:117] "RemoveContainer" containerID="35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.935644 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.935741 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.935831 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.936288 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.936529 4767 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.936805 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.936916 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). 
InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.942912 4767 scope.go:117] "RemoveContainer" containerID="f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f" Jan 27 15:53:56 crc kubenswrapper[4767]: E0127 15:53:56.943426 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\": container with ID starting with f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f not found: ID does not exist" containerID="f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.943467 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f"} err="failed to get container status \"f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\": rpc error: code = NotFound desc = could not find container \"f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f\": container with ID starting with f2167ca66dfa8e2875a92da4d0269c5221a5f5da1dc1240bee4f032c73868a1f not found: ID does not exist" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.943536 4767 scope.go:117] "RemoveContainer" containerID="33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467" Jan 27 15:53:56 crc kubenswrapper[4767]: E0127 15:53:56.943938 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\": container with ID starting with 33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467 not found: ID does not exist" containerID="33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.943962 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467"} err="failed to get container status \"33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\": rpc error: code = NotFound desc = could not find container \"33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467\": container with ID starting with 33f2c41554fe842c8d7fc645635144e1dc496121af9683e8652b159a6f667467 not found: ID does not exist" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.943977 4767 scope.go:117] "RemoveContainer" containerID="0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9" Jan 27 15:53:56 crc kubenswrapper[4767]: E0127 15:53:56.944249 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\": container with ID starting with 0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9 not found: ID does not exist" containerID="0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.944669 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9"} err="failed to get container status \"0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\": rpc error: 
code = NotFound desc = could not find container \"0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9\": container with ID starting with 0b0535e5b48925ab64fe8a36fb6dbf737634c05b3b14e8c5170a949e4afb79c9 not found: ID does not exist" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.944776 4767 scope.go:117] "RemoveContainer" containerID="f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243" Jan 27 15:53:56 crc kubenswrapper[4767]: E0127 15:53:56.945083 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\": container with ID starting with f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243 not found: ID does not exist" containerID="f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.945135 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243"} err="failed to get container status \"f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\": rpc error: code = NotFound desc = could not find container \"f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243\": container with ID starting with f82a3dd8d67fcb2da1d0fef9b755a0d7dbcf0c3e4e9359b91339eadb20686243 not found: ID does not exist" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.945157 4767 scope.go:117] "RemoveContainer" containerID="5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd" Jan 27 15:53:56 crc kubenswrapper[4767]: E0127 15:53:56.945372 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\": container with ID starting with 5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd not found: ID does not exist" containerID="5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.945396 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd"} err="failed to get container status \"5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\": rpc error: code = NotFound desc = could not find container \"5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd\": container with ID starting with 5b84f5d09e4279f2278c0ac6c51c1e9f516de64900aba7f2badd9c7de10b7edd not found: ID does not exist" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.945411 4767 scope.go:117] "RemoveContainer" containerID="35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220" Jan 27 15:53:56 crc kubenswrapper[4767]: E0127 15:53:56.945644 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\": container with ID starting with 35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220 not found: ID does not exist" containerID="35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220" Jan 27 15:53:56 crc kubenswrapper[4767]: I0127 15:53:56.945665 4767 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220"} err="failed to get container status \"35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\": rpc error: code = NotFound desc = could not find container \"35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220\": container with ID starting with 35f99d8d0d8bed3bd317a5594138104a5641fa02d6aed2888f52021e3ceae220 not found: ID does not exist" Jan 27 15:53:57 crc kubenswrapper[4767]: I0127 15:53:57.038665 4767 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 15:53:57 crc kubenswrapper[4767]: I0127 15:53:57.038692 4767 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 27 15:53:57 crc kubenswrapper[4767]: I0127 15:53:57.153432 4767 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:57 crc kubenswrapper[4767]: I0127 15:53:57.154146 4767 status_manager.go:851] "Failed to get status for pod" podUID="696728d7-87d8-4e30-a896-472a5b86d1ca" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:57 crc kubenswrapper[4767]: I0127 15:53:57.154647 4767 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:57 crc kubenswrapper[4767]: I0127 15:53:57.155026 4767 status_manager.go:851] "Failed to get status for pod" podUID="84510a56-8f29-404f-b5eb-c7433db1de6b" pod="openshift-marketplace/redhat-operators-7nshp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7nshp\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:57 crc kubenswrapper[4767]: I0127 15:53:57.155471 4767 status_manager.go:851] "Failed to get status for pod" podUID="43f8f2c5-51fc-4707-903f-fef9c5f133c5" pod="openshift-marketplace/redhat-marketplace-7pmbd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pmbd\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:58 crc kubenswrapper[4767]: I0127 15:53:58.327564 4767 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:58 crc kubenswrapper[4767]: I0127 15:53:58.328431 4767 status_manager.go:851] "Failed to get status for pod" podUID="696728d7-87d8-4e30-a896-472a5b86d1ca" pod="openshift-kube-apiserver/installer-9-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:58 crc kubenswrapper[4767]: I0127 15:53:58.328950 4767 status_manager.go:851] "Failed to get status for pod" podUID="84510a56-8f29-404f-b5eb-c7433db1de6b" pod="openshift-marketplace/redhat-operators-7nshp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7nshp\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:58 crc kubenswrapper[4767]: I0127 15:53:58.329321 4767 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:58 crc kubenswrapper[4767]: I0127 15:53:58.329516 4767 status_manager.go:851] "Failed to get status for pod" podUID="43f8f2c5-51fc-4707-903f-fef9c5f133c5" pod="openshift-marketplace/redhat-marketplace-7pmbd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pmbd\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:58 crc kubenswrapper[4767]: I0127 15:53:58.332318 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 27 15:53:59 crc kubenswrapper[4767]: E0127 15:53:59.756028 4767 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:59 crc kubenswrapper[4767]: E0127 15:53:59.756830 4767 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:59 crc kubenswrapper[4767]: E0127 15:53:59.757374 4767 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:59 crc kubenswrapper[4767]: E0127 15:53:59.757780 4767 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:59 crc kubenswrapper[4767]: E0127 15:53:59.758158 4767 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:53:59 crc kubenswrapper[4767]: I0127 15:53:59.758232 4767 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 27 15:53:59 crc kubenswrapper[4767]: E0127 15:53:59.758574 4767 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.102.83.132:6443: connect: connection refused" interval="200ms" Jan 27 15:53:59 crc kubenswrapper[4767]: E0127 15:53:59.959967 4767 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="400ms" Jan 27 15:54:00 crc kubenswrapper[4767]: E0127 15:54:00.360566 4767 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="800ms" Jan 27 15:54:01 crc kubenswrapper[4767]: E0127 15:54:01.162772 4767 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="1.6s" Jan 27 15:54:02 crc kubenswrapper[4767]: E0127 15:54:02.764728 4767 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="3.2s" Jan 27 15:54:04 crc kubenswrapper[4767]: E0127 15:54:04.309948 4767 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.132:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188ea176937af78b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 15:53:54.255705995 +0000 UTC m=+256.644723518,LastTimestamp:2026-01-27 15:53:54.255705995 +0000 UTC m=+256.644723518,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 15:54:04 crc kubenswrapper[4767]: E0127 15:54:04.341772 4767 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.132:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" volumeName="registry-storage" Jan 27 15:54:05 crc kubenswrapper[4767]: I0127 15:54:05.324586 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:54:05 crc kubenswrapper[4767]: I0127 15:54:05.326498 4767 status_manager.go:851] "Failed to get status for pod" podUID="696728d7-87d8-4e30-a896-472a5b86d1ca" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:54:05 crc kubenswrapper[4767]: I0127 15:54:05.326734 4767 status_manager.go:851] "Failed to get status for pod" podUID="84510a56-8f29-404f-b5eb-c7433db1de6b" pod="openshift-marketplace/redhat-operators-7nshp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7nshp\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:54:05 crc kubenswrapper[4767]: I0127 15:54:05.326975 4767 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:54:05 crc kubenswrapper[4767]: I0127 15:54:05.328721 4767 status_manager.go:851] "Failed to get status for pod" podUID="43f8f2c5-51fc-4707-903f-fef9c5f133c5" pod="openshift-marketplace/redhat-marketplace-7pmbd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pmbd\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:54:05 crc kubenswrapper[4767]: I0127 15:54:05.338785 4767 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bff4254-e814-4da3-bea2-c1167d764153" Jan 27 15:54:05 crc kubenswrapper[4767]: I0127 15:54:05.338815 4767 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bff4254-e814-4da3-bea2-c1167d764153" Jan 27 15:54:05 crc kubenswrapper[4767]: E0127 15:54:05.339162 4767 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:54:05 crc kubenswrapper[4767]: I0127 15:54:05.339692 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:54:05 crc kubenswrapper[4767]: I0127 15:54:05.887093 4767 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="1e282b764c272c93baf1ead98daa717cf53be8e945a668b0e182e7da94e762c4" exitCode=0 Jan 27 15:54:05 crc kubenswrapper[4767]: I0127 15:54:05.887187 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"1e282b764c272c93baf1ead98daa717cf53be8e945a668b0e182e7da94e762c4"} Jan 27 15:54:05 crc kubenswrapper[4767]: I0127 15:54:05.887430 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"af231307d1e4dff89ff287480d9d69d52875bb5289c6f82b889b187dba24259d"} Jan 27 15:54:05 crc kubenswrapper[4767]: I0127 15:54:05.887696 4767 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bff4254-e814-4da3-bea2-c1167d764153" Jan 27 15:54:05 crc kubenswrapper[4767]: I0127 15:54:05.887712 4767 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bff4254-e814-4da3-bea2-c1167d764153" Jan 27 15:54:05 crc kubenswrapper[4767]: E0127 15:54:05.888125 4767 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:54:05 crc kubenswrapper[4767]: I0127 15:54:05.888242 4767 status_manager.go:851] "Failed to get status for pod" podUID="696728d7-87d8-4e30-a896-472a5b86d1ca" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:54:05 crc kubenswrapper[4767]: I0127 15:54:05.888651 4767 status_manager.go:851] "Failed to get status for pod" podUID="84510a56-8f29-404f-b5eb-c7433db1de6b" pod="openshift-marketplace/redhat-operators-7nshp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7nshp\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:54:05 crc kubenswrapper[4767]: I0127 15:54:05.888959 4767 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:54:05 crc kubenswrapper[4767]: I0127 15:54:05.889266 4767 status_manager.go:851] "Failed to get status for pod" podUID="43f8f2c5-51fc-4707-903f-fef9c5f133c5" pod="openshift-marketplace/redhat-marketplace-7pmbd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-7pmbd\": dial tcp 38.102.83.132:6443: connect: connection refused" Jan 27 15:54:05 crc kubenswrapper[4767]: E0127 15:54:05.966831 4767 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="6.4s" Jan 27 15:54:06 crc kubenswrapper[4767]: I0127 15:54:06.913476 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d51da1e8df90deab911dbb45afe8becd5ad574ab1ecfa5267747be3fd61544bd"} Jan 27 15:54:06 crc kubenswrapper[4767]: I0127 15:54:06.914019 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e7a49177e2a996662a088cee54b1207c73d8bba4de6000063448ba3cd063b2f1"} Jan 27 15:54:06 crc kubenswrapper[4767]: I0127 15:54:06.914032 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"019bfe0149df5ae52fa4f717ce024c26e676ac844cb6d4f014b7d61405b7218b"} Jan 27 15:54:06 crc kubenswrapper[4767]: I0127 15:54:06.914042 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e31fa66ed2123a05c7e975416b8ea41318e7cb07936e40b92d27feaf088da7ff"} Jan 27 15:54:07 crc kubenswrapper[4767]: I0127 15:54:07.819924 4767 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 27 15:54:07 crc kubenswrapper[4767]: I0127 15:54:07.819994 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 27 15:54:07 crc kubenswrapper[4767]: I0127 15:54:07.920731 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f1d9f47634b5e2c25f15ced81e0e1e0c85be64fae83a90cbb540d6c1b9c130ac"} Jan 27 15:54:07 crc kubenswrapper[4767]: I0127 15:54:07.920854 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:54:07 crc kubenswrapper[4767]: I0127 15:54:07.920944 4767 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bff4254-e814-4da3-bea2-c1167d764153" Jan 27 15:54:07 crc kubenswrapper[4767]: I0127 15:54:07.920965 4767 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bff4254-e814-4da3-bea2-c1167d764153" Jan 27 15:54:07 crc kubenswrapper[4767]: I0127 15:54:07.923640 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 15:54:07 crc kubenswrapper[4767]: I0127 15:54:07.923689 4767 generic.go:334] "Generic (PLEG): container finished" 
podID="f614b9022728cf315e60c057852e563e" containerID="9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e" exitCode=1 Jan 27 15:54:07 crc kubenswrapper[4767]: I0127 15:54:07.923717 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e"} Jan 27 15:54:07 crc kubenswrapper[4767]: I0127 15:54:07.924149 4767 scope.go:117] "RemoveContainer" containerID="9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e" Jan 27 15:54:08 crc kubenswrapper[4767]: I0127 15:54:08.427364 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:54:08 crc kubenswrapper[4767]: I0127 15:54:08.937284 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 15:54:08 crc kubenswrapper[4767]: I0127 15:54:08.937344 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7fdfca1acaf9c7566a19a9bb69e10f77ebfdb5eb008f1cd4a0aad3732c5d1d48"} Jan 27 15:54:09 crc kubenswrapper[4767]: I0127 15:54:09.245390 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:54:09 crc kubenswrapper[4767]: I0127 15:54:09.245572 4767 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 27 15:54:09 crc kubenswrapper[4767]: I0127 15:54:09.245612 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 27 15:54:10 crc kubenswrapper[4767]: I0127 15:54:10.340121 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:54:10 crc kubenswrapper[4767]: I0127 15:54:10.340680 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:54:10 crc kubenswrapper[4767]: I0127 15:54:10.346776 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:54:12 crc kubenswrapper[4767]: I0127 15:54:12.931394 4767 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:54:12 crc kubenswrapper[4767]: I0127 15:54:12.960060 4767 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bff4254-e814-4da3-bea2-c1167d764153" Jan 27 15:54:12 crc kubenswrapper[4767]: I0127 15:54:12.960351 4767 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="3bff4254-e814-4da3-bea2-c1167d764153" Jan 27 15:54:12 crc kubenswrapper[4767]: I0127 15:54:12.963370 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:54:13 crc kubenswrapper[4767]: I0127 15:54:13.029865 4767 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="47d349bb-acaa-46b0-b02c-ce53450c6ad6" Jan 27 15:54:13 crc kubenswrapper[4767]: I0127 15:54:13.964750 4767 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bff4254-e814-4da3-bea2-c1167d764153" Jan 27 15:54:13 crc kubenswrapper[4767]: I0127 15:54:13.965345 4767 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bff4254-e814-4da3-bea2-c1167d764153" Jan 27 15:54:13 crc kubenswrapper[4767]: I0127 15:54:13.969107 4767 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="47d349bb-acaa-46b0-b02c-ce53450c6ad6" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.057173 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" podUID="9bc30087-3b0d-441b-b384-853b7e1003ad" containerName="oauth-openshift" containerID="cri-o://2d127503c268c098fe2cb1c67771ec45bedfe60e274619421bdf3ee7c0bd0542" gracePeriod=15 Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.435678 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.607769 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-serving-cert\") pod \"9bc30087-3b0d-441b-b384-853b7e1003ad\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.607805 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-user-template-login\") pod \"9bc30087-3b0d-441b-b384-853b7e1003ad\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.607834 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-cliconfig\") pod \"9bc30087-3b0d-441b-b384-853b7e1003ad\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.607868 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-user-template-provider-selection\") pod \"9bc30087-3b0d-441b-b384-853b7e1003ad\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.607893 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-session\") pod \"9bc30087-3b0d-441b-b384-853b7e1003ad\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.607945 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9bc30087-3b0d-441b-b384-853b7e1003ad-audit-policies\") pod \"9bc30087-3b0d-441b-b384-853b7e1003ad\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.607961 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9bc30087-3b0d-441b-b384-853b7e1003ad-audit-dir\") pod \"9bc30087-3b0d-441b-b384-853b7e1003ad\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.607977 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-user-template-error\") pod \"9bc30087-3b0d-441b-b384-853b7e1003ad\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.607996 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-user-idp-0-file-data\") pod \"9bc30087-3b0d-441b-b384-853b7e1003ad\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.608010 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtcks\" (UniqueName: \"kubernetes.io/projected/9bc30087-3b0d-441b-b384-853b7e1003ad-kube-api-access-jtcks\") pod \"9bc30087-3b0d-441b-b384-853b7e1003ad\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.608038 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-ocp-branding-template\") pod \"9bc30087-3b0d-441b-b384-853b7e1003ad\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.608060 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-service-ca\") pod \"9bc30087-3b0d-441b-b384-853b7e1003ad\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.608099 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-trusted-ca-bundle\") pod \"9bc30087-3b0d-441b-b384-853b7e1003ad\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.608126 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-router-certs\") pod 
\"9bc30087-3b0d-441b-b384-853b7e1003ad\" (UID: \"9bc30087-3b0d-441b-b384-853b7e1003ad\") " Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.608801 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bc30087-3b0d-441b-b384-853b7e1003ad-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9bc30087-3b0d-441b-b384-853b7e1003ad" (UID: "9bc30087-3b0d-441b-b384-853b7e1003ad"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.609320 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "9bc30087-3b0d-441b-b384-853b7e1003ad" (UID: "9bc30087-3b0d-441b-b384-853b7e1003ad"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.609334 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bc30087-3b0d-441b-b384-853b7e1003ad-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "9bc30087-3b0d-441b-b384-853b7e1003ad" (UID: "9bc30087-3b0d-441b-b384-853b7e1003ad"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.609412 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "9bc30087-3b0d-441b-b384-853b7e1003ad" (UID: "9bc30087-3b0d-441b-b384-853b7e1003ad"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.609483 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "9bc30087-3b0d-441b-b384-853b7e1003ad" (UID: "9bc30087-3b0d-441b-b384-853b7e1003ad"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.614231 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "9bc30087-3b0d-441b-b384-853b7e1003ad" (UID: "9bc30087-3b0d-441b-b384-853b7e1003ad"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.614265 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bc30087-3b0d-441b-b384-853b7e1003ad-kube-api-access-jtcks" (OuterVolumeSpecName: "kube-api-access-jtcks") pod "9bc30087-3b0d-441b-b384-853b7e1003ad" (UID: "9bc30087-3b0d-441b-b384-853b7e1003ad"). InnerVolumeSpecName "kube-api-access-jtcks". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.614613 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "9bc30087-3b0d-441b-b384-853b7e1003ad" (UID: "9bc30087-3b0d-441b-b384-853b7e1003ad"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.614956 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "9bc30087-3b0d-441b-b384-853b7e1003ad" (UID: "9bc30087-3b0d-441b-b384-853b7e1003ad"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.615137 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "9bc30087-3b0d-441b-b384-853b7e1003ad" (UID: "9bc30087-3b0d-441b-b384-853b7e1003ad"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.615504 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "9bc30087-3b0d-441b-b384-853b7e1003ad" (UID: "9bc30087-3b0d-441b-b384-853b7e1003ad"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.615874 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "9bc30087-3b0d-441b-b384-853b7e1003ad" (UID: "9bc30087-3b0d-441b-b384-853b7e1003ad"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.616571 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "9bc30087-3b0d-441b-b384-853b7e1003ad" (UID: "9bc30087-3b0d-441b-b384-853b7e1003ad"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.619505 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "9bc30087-3b0d-441b-b384-853b7e1003ad" (UID: "9bc30087-3b0d-441b-b384-853b7e1003ad"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.709467 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.709687 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.709756 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.709812 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.709878 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.709933 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.709988 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.710044 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.710102 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.710165 4767 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9bc30087-3b0d-441b-b384-853b7e1003ad-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.710242 4767 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9bc30087-3b0d-441b-b384-853b7e1003ad-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.710300 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.710354 4767 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9bc30087-3b0d-441b-b384-853b7e1003ad-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.710418 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtcks\" (UniqueName: \"kubernetes.io/projected/9bc30087-3b0d-441b-b384-853b7e1003ad-kube-api-access-jtcks\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.982274 4767 generic.go:334] "Generic (PLEG): container finished" podID="9bc30087-3b0d-441b-b384-853b7e1003ad" containerID="2d127503c268c098fe2cb1c67771ec45bedfe60e274619421bdf3ee7c0bd0542" exitCode=0 Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.982352 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" event={"ID":"9bc30087-3b0d-441b-b384-853b7e1003ad","Type":"ContainerDied","Data":"2d127503c268c098fe2cb1c67771ec45bedfe60e274619421bdf3ee7c0bd0542"} Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.982416 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" event={"ID":"9bc30087-3b0d-441b-b384-853b7e1003ad","Type":"ContainerDied","Data":"b067bfa0872a5da37affc6eb98c088d2d27e9dfca3b4b7f8fbd83628c377aa2f"} Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.982442 4767 scope.go:117] "RemoveContainer" containerID="2d127503c268c098fe2cb1c67771ec45bedfe60e274619421bdf3ee7c0bd0542" Jan 27 15:54:16 crc kubenswrapper[4767]: I0127 15:54:16.982691 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tqzlw" Jan 27 15:54:17 crc kubenswrapper[4767]: I0127 15:54:17.012319 4767 scope.go:117] "RemoveContainer" containerID="2d127503c268c098fe2cb1c67771ec45bedfe60e274619421bdf3ee7c0bd0542" Jan 27 15:54:17 crc kubenswrapper[4767]: E0127 15:54:17.012853 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d127503c268c098fe2cb1c67771ec45bedfe60e274619421bdf3ee7c0bd0542\": container with ID starting with 2d127503c268c098fe2cb1c67771ec45bedfe60e274619421bdf3ee7c0bd0542 not found: ID does not exist" containerID="2d127503c268c098fe2cb1c67771ec45bedfe60e274619421bdf3ee7c0bd0542" Jan 27 15:54:17 crc kubenswrapper[4767]: I0127 15:54:17.012907 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d127503c268c098fe2cb1c67771ec45bedfe60e274619421bdf3ee7c0bd0542"} err="failed to get container status \"2d127503c268c098fe2cb1c67771ec45bedfe60e274619421bdf3ee7c0bd0542\": rpc error: code = NotFound desc = could not find container \"2d127503c268c098fe2cb1c67771ec45bedfe60e274619421bdf3ee7c0bd0542\": container with ID starting with 2d127503c268c098fe2cb1c67771ec45bedfe60e274619421bdf3ee7c0bd0542 not found: ID does not exist" Jan 27 15:54:18 crc kubenswrapper[4767]: I0127 15:54:18.427132 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:54:19 crc kubenswrapper[4767]: I0127 15:54:19.245400 4767 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 27 15:54:19 crc kubenswrapper[4767]: I0127 15:54:19.245679 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 27 15:54:19 crc kubenswrapper[4767]: I0127 15:54:19.959549 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 15:54:20 crc kubenswrapper[4767]: I0127 15:54:20.010721 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 15:54:20 crc kubenswrapper[4767]: I0127 15:54:20.933805 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 15:54:21 crc kubenswrapper[4767]: I0127 15:54:21.453018 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 15:54:22 crc kubenswrapper[4767]: I0127 15:54:22.631781 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 15:54:23 crc kubenswrapper[4767]: I0127 15:54:23.292331 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 15:54:23 crc kubenswrapper[4767]: I0127 15:54:23.964460 4767 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 15:54:24 crc kubenswrapper[4767]: I0127 15:54:24.290481 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 15:54:24 crc kubenswrapper[4767]: I0127 15:54:24.425890 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 15:54:24 crc kubenswrapper[4767]: I0127 15:54:24.551579 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 15:54:25 crc kubenswrapper[4767]: I0127 15:54:25.123264 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 15:54:25 crc kubenswrapper[4767]: I0127 15:54:25.305971 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 15:54:25 crc kubenswrapper[4767]: I0127 15:54:25.352590 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 15:54:26 crc kubenswrapper[4767]: I0127 15:54:26.092117 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 15:54:26 crc kubenswrapper[4767]: I0127 15:54:26.352522 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 15:54:26 crc kubenswrapper[4767]: I0127 15:54:26.408150 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 15:54:26 crc kubenswrapper[4767]: I0127 15:54:26.579262 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 15:54:26 crc kubenswrapper[4767]: I0127 15:54:26.580466 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 15:54:26 crc kubenswrapper[4767]: I0127 15:54:26.602855 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 15:54:26 crc kubenswrapper[4767]: I0127 15:54:26.654620 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 15:54:26 crc kubenswrapper[4767]: I0127 15:54:26.745343 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 15:54:26 crc kubenswrapper[4767]: I0127 15:54:26.765222 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 15:54:26 crc kubenswrapper[4767]: I0127 15:54:26.859083 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 15:54:26 crc kubenswrapper[4767]: I0127 15:54:26.870615 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 15:54:26 crc kubenswrapper[4767]: I0127 15:54:26.876925 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 15:54:27 crc kubenswrapper[4767]: I0127 15:54:27.145238 4767 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 15:54:27 crc kubenswrapper[4767]: I0127 15:54:27.231308 4767 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 15:54:27 crc kubenswrapper[4767]: I0127 15:54:27.397193 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 15:54:27 crc kubenswrapper[4767]: I0127 15:54:27.496537 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 15:54:27 crc kubenswrapper[4767]: I0127 15:54:27.507774 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 15:54:27 crc kubenswrapper[4767]: I0127 15:54:27.563934 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 15:54:27 crc kubenswrapper[4767]: I0127 15:54:27.865397 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 15:54:27 crc kubenswrapper[4767]: I0127 15:54:27.935066 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 15:54:27 crc kubenswrapper[4767]: I0127 15:54:27.985494 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 15:54:27 crc kubenswrapper[4767]: I0127 15:54:27.987400 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 15:54:28 crc kubenswrapper[4767]: I0127 15:54:28.014791 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 15:54:28 crc kubenswrapper[4767]: I0127 15:54:28.015010 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 15:54:28 crc kubenswrapper[4767]: I0127 15:54:28.173921 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 15:54:28 crc kubenswrapper[4767]: I0127 15:54:28.193969 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 15:54:28 crc kubenswrapper[4767]: I0127 15:54:28.215697 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 15:54:28 crc kubenswrapper[4767]: I0127 15:54:28.264647 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 15:54:28 crc kubenswrapper[4767]: I0127 15:54:28.282557 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 15:54:28 crc kubenswrapper[4767]: I0127 15:54:28.323965 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 15:54:28 crc kubenswrapper[4767]: I0127 15:54:28.406119 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 15:54:28 crc kubenswrapper[4767]: I0127 15:54:28.529694 4767 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 15:54:28 crc kubenswrapper[4767]: I0127 15:54:28.583463 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 15:54:28 crc kubenswrapper[4767]: I0127 15:54:28.710930 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 15:54:28 crc kubenswrapper[4767]: I0127 15:54:28.755656 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 15:54:28 crc kubenswrapper[4767]: I0127 15:54:28.894629 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 15:54:28 crc kubenswrapper[4767]: I0127 15:54:28.924173 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 15:54:29 crc kubenswrapper[4767]: I0127 15:54:29.033591 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 15:54:29 crc kubenswrapper[4767]: I0127 15:54:29.135996 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 15:54:29 crc kubenswrapper[4767]: I0127 15:54:29.172453 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 15:54:29 crc kubenswrapper[4767]: I0127 15:54:29.215436 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 15:54:29 crc kubenswrapper[4767]: I0127 15:54:29.244787 4767 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 27 15:54:29 crc kubenswrapper[4767]: I0127 15:54:29.244851 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 27 15:54:29 crc kubenswrapper[4767]: I0127 15:54:29.244905 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:54:29 crc kubenswrapper[4767]: I0127 15:54:29.245557 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"7fdfca1acaf9c7566a19a9bb69e10f77ebfdb5eb008f1cd4a0aad3732c5d1d48"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 27 15:54:29 crc kubenswrapper[4767]: I0127 15:54:29.245670 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" 
containerID="cri-o://7fdfca1acaf9c7566a19a9bb69e10f77ebfdb5eb008f1cd4a0aad3732c5d1d48" gracePeriod=30 Jan 27 15:54:29 crc kubenswrapper[4767]: I0127 15:54:29.473261 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 15:54:29 crc kubenswrapper[4767]: I0127 15:54:29.483485 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 15:54:29 crc kubenswrapper[4767]: I0127 15:54:29.512340 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 15:54:29 crc kubenswrapper[4767]: I0127 15:54:29.513560 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 15:54:29 crc kubenswrapper[4767]: I0127 15:54:29.519000 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 15:54:29 crc kubenswrapper[4767]: I0127 15:54:29.683890 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 15:54:29 crc kubenswrapper[4767]: I0127 15:54:29.717614 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 15:54:29 crc kubenswrapper[4767]: I0127 15:54:29.784553 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 15:54:29 crc kubenswrapper[4767]: I0127 15:54:29.786588 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 15:54:29 crc kubenswrapper[4767]: I0127 15:54:29.788938 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 15:54:29 crc kubenswrapper[4767]: I0127 15:54:29.894034 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 15:54:29 crc kubenswrapper[4767]: I0127 15:54:29.908833 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 15:54:29 crc kubenswrapper[4767]: I0127 15:54:29.942432 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 15:54:29 crc kubenswrapper[4767]: I0127 15:54:29.993919 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 15:54:30 crc kubenswrapper[4767]: I0127 15:54:30.015113 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 15:54:30 crc kubenswrapper[4767]: I0127 15:54:30.319910 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 15:54:30 crc kubenswrapper[4767]: I0127 15:54:30.320863 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 15:54:30 crc kubenswrapper[4767]: I0127 15:54:30.377305 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 15:54:30 crc kubenswrapper[4767]: I0127 15:54:30.444397 4767 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 15:54:30 crc kubenswrapper[4767]: I0127 15:54:30.463849 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 15:54:30 crc kubenswrapper[4767]: I0127 15:54:30.508984 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 15:54:30 crc kubenswrapper[4767]: I0127 15:54:30.522144 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 15:54:30 crc kubenswrapper[4767]: I0127 15:54:30.589456 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 15:54:30 crc kubenswrapper[4767]: I0127 15:54:30.633175 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 15:54:30 crc kubenswrapper[4767]: I0127 15:54:30.689355 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 15:54:30 crc kubenswrapper[4767]: I0127 15:54:30.815387 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 15:54:30 crc kubenswrapper[4767]: I0127 15:54:30.847699 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 15:54:30 crc kubenswrapper[4767]: I0127 15:54:30.849404 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 15:54:30 crc kubenswrapper[4767]: I0127 15:54:30.936148 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 15:54:31 crc kubenswrapper[4767]: I0127 15:54:31.022349 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 15:54:31 crc kubenswrapper[4767]: I0127 15:54:31.038418 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 15:54:31 crc kubenswrapper[4767]: I0127 15:54:31.040564 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 15:54:31 crc kubenswrapper[4767]: I0127 15:54:31.070771 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 15:54:31 crc kubenswrapper[4767]: I0127 15:54:31.121796 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 15:54:31 crc kubenswrapper[4767]: I0127 15:54:31.148919 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 15:54:31 crc kubenswrapper[4767]: I0127 15:54:31.154496 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 15:54:31 crc kubenswrapper[4767]: I0127 15:54:31.320721 4767 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 15:54:31 crc kubenswrapper[4767]: I0127 15:54:31.359816 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 15:54:31 crc kubenswrapper[4767]: I0127 15:54:31.384306 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 15:54:31 crc kubenswrapper[4767]: I0127 15:54:31.531400 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 15:54:31 crc kubenswrapper[4767]: I0127 15:54:31.565395 4767 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 15:54:31 crc kubenswrapper[4767]: I0127 15:54:31.598038 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 15:54:31 crc kubenswrapper[4767]: I0127 15:54:31.650954 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 15:54:31 crc kubenswrapper[4767]: I0127 15:54:31.739685 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 15:54:31 crc kubenswrapper[4767]: I0127 15:54:31.747123 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 15:54:31 crc kubenswrapper[4767]: I0127 15:54:31.770035 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 15:54:31 crc kubenswrapper[4767]: I0127 15:54:31.859094 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 15:54:31 crc kubenswrapper[4767]: I0127 15:54:31.881352 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 15:54:31 crc kubenswrapper[4767]: I0127 15:54:31.896302 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 15:54:31 crc kubenswrapper[4767]: I0127 15:54:31.899583 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 15:54:31 crc kubenswrapper[4767]: I0127 15:54:31.943691 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 15:54:32 crc kubenswrapper[4767]: I0127 15:54:32.017486 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 15:54:32 crc kubenswrapper[4767]: I0127 15:54:32.062717 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 15:54:32 crc kubenswrapper[4767]: I0127 15:54:32.109522 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 15:54:32 crc kubenswrapper[4767]: I0127 15:54:32.183419 4767 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 15:54:32 crc kubenswrapper[4767]: I0127 15:54:32.199608 4767 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"cni-copy-resources" Jan 27 15:54:32 crc kubenswrapper[4767]: I0127 15:54:32.237756 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 15:54:32 crc kubenswrapper[4767]: I0127 15:54:32.273438 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 15:54:32 crc kubenswrapper[4767]: I0127 15:54:32.418945 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 15:54:32 crc kubenswrapper[4767]: I0127 15:54:32.443400 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 15:54:32 crc kubenswrapper[4767]: I0127 15:54:32.548606 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 15:54:32 crc kubenswrapper[4767]: I0127 15:54:32.560139 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 15:54:32 crc kubenswrapper[4767]: I0127 15:54:32.664537 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 15:54:32 crc kubenswrapper[4767]: I0127 15:54:32.849070 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 15:54:33 crc kubenswrapper[4767]: I0127 15:54:33.067589 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 15:54:33 crc kubenswrapper[4767]: I0127 15:54:33.151771 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 15:54:33 crc kubenswrapper[4767]: I0127 15:54:33.236740 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 15:54:33 crc kubenswrapper[4767]: I0127 15:54:33.344170 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 15:54:33 crc kubenswrapper[4767]: I0127 15:54:33.417260 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 15:54:33 crc kubenswrapper[4767]: I0127 15:54:33.454263 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 15:54:33 crc kubenswrapper[4767]: I0127 15:54:33.464388 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 15:54:33 crc kubenswrapper[4767]: I0127 15:54:33.479374 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 15:54:33 crc kubenswrapper[4767]: I0127 15:54:33.542854 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 15:54:33 crc kubenswrapper[4767]: I0127 15:54:33.547999 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 15:54:33 crc kubenswrapper[4767]: I0127 15:54:33.592040 4767 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 15:54:33 crc kubenswrapper[4767]: I0127 15:54:33.668453 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 15:54:33 crc kubenswrapper[4767]: I0127 15:54:33.808064 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 15:54:33 crc kubenswrapper[4767]: I0127 15:54:33.820999 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 15:54:33 crc kubenswrapper[4767]: I0127 15:54:33.883648 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 15:54:33 crc kubenswrapper[4767]: I0127 15:54:33.884374 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 15:54:34 crc kubenswrapper[4767]: I0127 15:54:34.120844 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 15:54:34 crc kubenswrapper[4767]: I0127 15:54:34.157496 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 15:54:34 crc kubenswrapper[4767]: I0127 15:54:34.208973 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 15:54:34 crc kubenswrapper[4767]: I0127 15:54:34.251955 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 15:54:34 crc kubenswrapper[4767]: I0127 15:54:34.350524 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 15:54:34 crc kubenswrapper[4767]: I0127 15:54:34.398487 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 15:54:34 crc kubenswrapper[4767]: I0127 15:54:34.437117 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 15:54:34 crc kubenswrapper[4767]: I0127 15:54:34.437117 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 15:54:34 crc kubenswrapper[4767]: I0127 15:54:34.553352 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 15:54:34 crc kubenswrapper[4767]: I0127 15:54:34.553957 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 15:54:34 crc kubenswrapper[4767]: I0127 15:54:34.618125 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 15:54:34 crc kubenswrapper[4767]: I0127 15:54:34.634374 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 15:54:34 crc kubenswrapper[4767]: I0127 15:54:34.677494 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 15:54:34 crc kubenswrapper[4767]: I0127 15:54:34.716389 4767 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 15:54:34 crc kubenswrapper[4767]: I0127 15:54:34.736922 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 15:54:34 crc kubenswrapper[4767]: I0127 15:54:34.796514 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 15:54:34 crc kubenswrapper[4767]: I0127 15:54:34.843580 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 15:54:34 crc kubenswrapper[4767]: I0127 15:54:34.945328 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 15:54:34 crc kubenswrapper[4767]: I0127 15:54:34.952485 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 15:54:35 crc kubenswrapper[4767]: I0127 15:54:35.001436 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 15:54:35 crc kubenswrapper[4767]: I0127 15:54:35.093331 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 15:54:35 crc kubenswrapper[4767]: I0127 15:54:35.166747 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 15:54:35 crc kubenswrapper[4767]: I0127 15:54:35.196304 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 15:54:35 crc kubenswrapper[4767]: I0127 15:54:35.248731 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 15:54:35 crc kubenswrapper[4767]: I0127 15:54:35.312163 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 15:54:35 crc kubenswrapper[4767]: I0127 15:54:35.345059 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 15:54:35 crc kubenswrapper[4767]: I0127 15:54:35.463661 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 15:54:35 crc kubenswrapper[4767]: I0127 15:54:35.595752 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 15:54:35 crc kubenswrapper[4767]: I0127 15:54:35.615077 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 15:54:35 crc kubenswrapper[4767]: I0127 15:54:35.689634 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 15:54:35 crc kubenswrapper[4767]: I0127 15:54:35.749849 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 15:54:35 crc kubenswrapper[4767]: I0127 15:54:35.814388 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 15:54:35 crc kubenswrapper[4767]: I0127 15:54:35.843674 4767 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 15:54:35 crc kubenswrapper[4767]: I0127 15:54:35.903726 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 15:54:35 crc kubenswrapper[4767]: I0127 15:54:35.928489 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 15:54:35 crc kubenswrapper[4767]: I0127 15:54:35.945781 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 15:54:35 crc kubenswrapper[4767]: I0127 15:54:35.962801 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 15:54:35 crc kubenswrapper[4767]: I0127 15:54:35.990994 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 15:54:35 crc kubenswrapper[4767]: I0127 15:54:35.998649 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.031137 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.094155 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.180987 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.202620 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.286999 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.300537 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.322117 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.326715 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.409147 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.415001 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.420653 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.604609 4767 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.606352 4767 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.606782 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=43.606769513 podStartE2EDuration="43.606769513s" podCreationTimestamp="2026-01-27 15:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:54:12.973863506 +0000 UTC m=+275.362881029" watchObservedRunningTime="2026-01-27 15:54:36.606769513 +0000 UTC m=+298.995787036" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.609064 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-tqzlw"] Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.609110 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"] Jan 27 15:54:36 crc kubenswrapper[4767]: E0127 15:54:36.609300 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bc30087-3b0d-441b-b384-853b7e1003ad" containerName="oauth-openshift" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.609317 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bc30087-3b0d-441b-b384-853b7e1003ad" containerName="oauth-openshift" Jan 27 15:54:36 crc kubenswrapper[4767]: E0127 15:54:36.609327 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696728d7-87d8-4e30-a896-472a5b86d1ca" containerName="installer" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.609334 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="696728d7-87d8-4e30-a896-472a5b86d1ca" containerName="installer" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.609428 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="696728d7-87d8-4e30-a896-472a5b86d1ca" containerName="installer" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.609440 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bc30087-3b0d-441b-b384-853b7e1003ad" containerName="oauth-openshift" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.609567 4767 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bff4254-e814-4da3-bea2-c1167d764153" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.609593 4767 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bff4254-e814-4da3-bea2-c1167d764153" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.619599 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.622695 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.623297 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.623361 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.623456 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.623684 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.623787 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.623867 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.624000 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.624146 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.624983 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.625448 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.628790 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.629333 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.635225 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.639217 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.644388 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.659474 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.678904 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=24.678883254 
podStartE2EDuration="24.678883254s" podCreationTimestamp="2026-01-27 15:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:54:36.67490643 +0000 UTC m=+299.063923953" watchObservedRunningTime="2026-01-27 15:54:36.678883254 +0000 UTC m=+299.067900777" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.741417 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.743103 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.762097 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.762149 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-user-template-error\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.762177 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.762212 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-user-template-login\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.762278 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-system-service-ca\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz" Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.762312 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2w9m\" (UniqueName: \"kubernetes.io/projected/e48679bb-1ace-47f6-97ed-18189b4c469c-kube-api-access-m2w9m\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz" Jan 27 15:54:36 crc 
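The "Observed pod startup duration" entries above are simple arithmetic: podStartSLOduration is observedRunningTime minus podCreationTimestamp, and the zero-valued firstStartedPulling/lastFinishedPulling mean no image pull contributed (the images were already on the node). For the startup monitor, 15:54:36.606769513 minus 15:53:53 gives exactly the logged 43.606769513s; for kube-apiserver-crc, 15:54:36.678883254 minus 15:54:12 gives 24.678883254s. A quick check of that arithmetic, assuming the timestamps use Go's default time.Time string layout:

```go
// sloduration.go - sanity check of the podStartSLOduration arithmetic in
// the "Observed pod startup duration" entries: with no image pull, SLO
// duration is observedRunningTime minus podCreationTimestamp.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Layout assumed to match the log's "2026-01-27 15:53:53 +0000 UTC" form.
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-27 15:53:53 +0000 UTC")
	running := mustParse("2026-01-27 15:54:36.606769513 +0000 UTC")
	fmt.Println(running.Sub(created).Seconds()) // 43.606769513, as logged
}
```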
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.762333 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.762351 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.762368 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e48679bb-1ace-47f6-97ed-18189b4c469c-audit-dir\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.762473 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.762490 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-system-session\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.762522 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e48679bb-1ace-47f6-97ed-18189b4c469c-audit-policies\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.762540 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-system-router-certs\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.762561 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.864092 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.864141 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-user-template-error\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.864167 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.864216 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-user-template-login\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.864250 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-system-service-ca\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.864277 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2w9m\" (UniqueName: \"kubernetes.io/projected/e48679bb-1ace-47f6-97ed-18189b4c469c-kube-api-access-m2w9m\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.864295 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.864314 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.864330 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e48679bb-1ace-47f6-97ed-18189b4c469c-audit-dir\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.864348 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.864369 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-system-session\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.864400 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e48679bb-1ace-47f6-97ed-18189b4c469c-audit-policies\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.864418 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-system-router-certs\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.864438 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.865409 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-system-service-ca\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.865560 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e48679bb-1ace-47f6-97ed-18189b4c469c-audit-dir\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.865612 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.865665 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e48679bb-1ace-47f6-97ed-18189b4c469c-audit-policies\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.865747 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.873075 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.873128 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-system-session\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.873149 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.873502 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.873993 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-user-template-login\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.874356 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-user-template-error\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.876578 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.876599 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.876645 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e48679bb-1ace-47f6-97ed-18189b4c469c-v4-0-config-system-router-certs\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.887936 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 27 15:54:36 crc kubenswrapper[4767]: I0127 15:54:36.891961 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2w9m\" (UniqueName: \"kubernetes.io/projected/e48679bb-1ace-47f6-97ed-18189b4c469c-kube-api-access-m2w9m\") pod \"oauth-openshift-6467d9dbc9-n8gdz\" (UID: \"e48679bb-1ace-47f6-97ed-18189b4c469c\") " pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"
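The reconciler_common.go entries above trace one pass of the kubelet volume manager's reconciler for the new oauth-openshift pod: VerifyControllerAttachedVolume registers each volume in the desired state of the world, MountVolume starts for anything not yet in the actual state, and MountVolume.SetUp succeeded moves it there; only once every volume is mounted can the sandbox start (the "No sandbox for pod" line that follows). A toy sketch of that desired-vs-actual loop; the types and names are invented for illustration, the real code lives in kubelet's pkg/kubelet/volumemanager.

```go
// volreconcile.go - toy sketch of the desired-state vs. actual-state pass
// behind the reconciler_common.go lines above. Not kubelet's real types.
package main

import "fmt"

type volume struct{ name, plugin string }

func main() {
	// Desired state of the world: volumes the pod spec references
	// (the VerifyControllerAttachedVolume entries).
	desired := []volume{
		{"v4-0-config-system-session", "kubernetes.io/secret"},
		{"audit-policies", "kubernetes.io/configmap"},
		{"kube-api-access-m2w9m", "kubernetes.io/projected"},
	}
	// Actual state of the world: volumes already mounted.
	actual := map[string]bool{}

	// One reconciler pass: mount whatever is desired but not yet actual.
	for _, v := range desired {
		if actual[v.name] {
			continue
		}
		fmt.Printf("MountVolume started for volume %q (%s)\n", v.name, v.plugin)
		// ...the volume plugin's SetUp would run here...
		actual[v.name] = true
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
	}
}
```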
Need to start a new one" pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz" Jan 27 15:54:37 crc kubenswrapper[4767]: I0127 15:54:37.150715 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 15:54:37 crc kubenswrapper[4767]: I0127 15:54:37.228687 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 15:54:37 crc kubenswrapper[4767]: I0127 15:54:37.241406 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 15:54:37 crc kubenswrapper[4767]: I0127 15:54:37.266402 4767 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 15:54:37 crc kubenswrapper[4767]: I0127 15:54:37.319184 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 15:54:37 crc kubenswrapper[4767]: I0127 15:54:37.394644 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 15:54:37 crc kubenswrapper[4767]: I0127 15:54:37.464538 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 15:54:37 crc kubenswrapper[4767]: I0127 15:54:37.530067 4767 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 15:54:37 crc kubenswrapper[4767]: I0127 15:54:37.552499 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 15:54:37 crc kubenswrapper[4767]: I0127 15:54:37.607923 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 15:54:37 crc kubenswrapper[4767]: I0127 15:54:37.778694 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 15:54:37 crc kubenswrapper[4767]: I0127 15:54:37.901063 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 15:54:38 crc kubenswrapper[4767]: I0127 15:54:38.124180 4767 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 27 15:54:38 crc kubenswrapper[4767]: I0127 15:54:38.163639 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 15:54:38 crc kubenswrapper[4767]: I0127 15:54:38.217706 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 15:54:38 crc kubenswrapper[4767]: I0127 15:54:38.227535 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 15:54:38 crc kubenswrapper[4767]: I0127 15:54:38.332612 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bc30087-3b0d-441b-b384-853b7e1003ad" path="/var/lib/kubelet/pods/9bc30087-3b0d-441b-b384-853b7e1003ad/volumes" Jan 27 15:54:38 crc kubenswrapper[4767]: I0127 15:54:38.377651 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 15:54:38 crc kubenswrapper[4767]: I0127 15:54:38.410241 4767 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"] Jan 27 15:54:38 crc kubenswrapper[4767]: I0127 15:54:38.677400 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 15:54:38 crc kubenswrapper[4767]: I0127 15:54:38.775535 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz"] Jan 27 15:54:38 crc kubenswrapper[4767]: I0127 15:54:38.862313 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 15:54:38 crc kubenswrapper[4767]: I0127 15:54:38.885569 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 15:54:38 crc kubenswrapper[4767]: I0127 15:54:38.966107 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 15:54:39 crc kubenswrapper[4767]: I0127 15:54:39.027727 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 15:54:39 crc kubenswrapper[4767]: I0127 15:54:39.051913 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 15:54:39 crc kubenswrapper[4767]: I0127 15:54:39.083529 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 15:54:39 crc kubenswrapper[4767]: I0127 15:54:39.102939 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz" event={"ID":"e48679bb-1ace-47f6-97ed-18189b4c469c","Type":"ContainerStarted","Data":"b0d618e3691ce43b92ead4184bde68acd3c6cb862473f2b61978278cd7fc860f"} Jan 27 15:54:39 crc kubenswrapper[4767]: I0127 15:54:39.102983 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz" event={"ID":"e48679bb-1ace-47f6-97ed-18189b4c469c","Type":"ContainerStarted","Data":"40e88558aa49270095e2e8a7a20df0622814f46aa2016a425a0b5f0ca55e78fa"} Jan 27 15:54:39 crc kubenswrapper[4767]: I0127 15:54:39.104239 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz" Jan 27 15:54:39 crc kubenswrapper[4767]: I0127 15:54:39.106045 4767 patch_prober.go:28] interesting pod/oauth-openshift-6467d9dbc9-n8gdz container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.58:6443/healthz\": dial tcp 10.217.0.58:6443: connect: connection refused" start-of-body= Jan 27 15:54:39 crc kubenswrapper[4767]: I0127 15:54:39.106105 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz" podUID="e48679bb-1ace-47f6-97ed-18189b4c469c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.58:6443/healthz\": dial tcp 10.217.0.58:6443: connect: connection refused" Jan 27 15:54:39 crc kubenswrapper[4767]: I0127 15:54:39.129495 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 15:54:39 crc kubenswrapper[4767]: I0127 15:54:39.132541 4767 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 15:54:39 crc kubenswrapper[4767]: I0127 15:54:39.204762 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 15:54:39 crc kubenswrapper[4767]: I0127 15:54:39.211217 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 15:54:39 crc kubenswrapper[4767]: I0127 15:54:39.232707 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 15:54:39 crc kubenswrapper[4767]: I0127 15:54:39.254417 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 15:54:39 crc kubenswrapper[4767]: I0127 15:54:39.258043 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 15:54:39 crc kubenswrapper[4767]: I0127 15:54:39.469886 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 15:54:39 crc kubenswrapper[4767]: I0127 15:54:39.592450 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 15:54:39 crc kubenswrapper[4767]: I0127 15:54:39.634145 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 15:54:39 crc kubenswrapper[4767]: I0127 15:54:39.822897 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 15:54:39 crc kubenswrapper[4767]: I0127 15:54:39.911751 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 15:54:39 crc kubenswrapper[4767]: I0127 15:54:39.938949 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 15:54:40 crc kubenswrapper[4767]: I0127 15:54:40.111485 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz" Jan 27 15:54:40 crc kubenswrapper[4767]: I0127 15:54:40.128880 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6467d9dbc9-n8gdz" podStartSLOduration=49.128859085 podStartE2EDuration="49.128859085s" podCreationTimestamp="2026-01-27 15:53:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:54:39.12950061 +0000 UTC m=+301.518518133" watchObservedRunningTime="2026-01-27 15:54:40.128859085 +0000 UTC m=+302.517876608" Jan 27 15:54:40 crc kubenswrapper[4767]: I0127 15:54:40.338118 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 15:54:40 crc kubenswrapper[4767]: I0127 15:54:40.395297 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 15:54:40 crc kubenswrapper[4767]: I0127 15:54:40.403875 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 15:54:41 crc 
kubenswrapper[4767]: I0127 15:54:41.034674 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 15:54:41 crc kubenswrapper[4767]: I0127 15:54:41.062879 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 15:54:41 crc kubenswrapper[4767]: I0127 15:54:41.175513 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 15:54:41 crc kubenswrapper[4767]: I0127 15:54:41.338680 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 15:54:41 crc kubenswrapper[4767]: I0127 15:54:41.985831 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 15:54:42 crc kubenswrapper[4767]: I0127 15:54:42.631976 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 15:54:43 crc kubenswrapper[4767]: I0127 15:54:43.472981 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 15:54:46 crc kubenswrapper[4767]: I0127 15:54:46.901650 4767 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 15:54:46 crc kubenswrapper[4767]: I0127 15:54:46.902238 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://e2f0186294f7f6eb3a8e167d43229abf5b6aa495ec1da88904b1cca4cdb832c9" gracePeriod=5 Jan 27 15:54:52 crc kubenswrapper[4767]: I0127 15:54:52.179177 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 15:54:52 crc kubenswrapper[4767]: I0127 15:54:52.179731 4767 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="e2f0186294f7f6eb3a8e167d43229abf5b6aa495ec1da88904b1cca4cdb832c9" exitCode=137 Jan 27 15:54:52 crc kubenswrapper[4767]: I0127 15:54:52.486869 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 15:54:52 crc kubenswrapper[4767]: I0127 15:54:52.486937 4767 util.go:48] "No ready sandbox for pod can be found. 
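The pairing of gracePeriod=5 with exitCode=137 above tells the shutdown story: the kubelet asks the runtime to stop the startup-monitor container, which delivers SIGTERM, waits out the grace period, then delivers SIGKILL; 137 is 128+9, meaning the process was still alive after five seconds and took the SIGKILL. A sketch of the same pattern against a plain process follows; the shell command standing in for the container deliberately ignores SIGTERM so the kill path is exercised (Unix-only, not CRI-O's actual implementation).

```go
// killgrace.go - sketch of "Killing container with a grace period":
// SIGTERM first, SIGKILL once the grace period expires.
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	// Stand-in for the container process; it ignores SIGTERM on purpose.
	cmd := exec.Command("sh", "-c", "trap '' TERM; sleep 60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	_ = cmd.Process.Signal(syscall.SIGTERM) // polite stop request
	select {
	case <-done: // exited within the grace period
	case <-time.After(5 * time.Second): // gracePeriod=5, as in the log
		_ = cmd.Process.Kill() // SIGKILL cannot be ignored
		<-done
	}

	// Report the 128+signal convention container runtimes use.
	if ws, ok := cmd.ProcessState.Sys().(syscall.WaitStatus); ok && ws.Signaled() {
		fmt.Println("exitCode =", 128+int(ws.Signal())) // prints 137
	} else {
		fmt.Println("exitCode =", cmd.ProcessState.ExitCode())
	}
}
```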
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:54:52 crc kubenswrapper[4767]: I0127 15:54:52.572040 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 15:54:52 crc kubenswrapper[4767]: I0127 15:54:52.572139 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 15:54:52 crc kubenswrapper[4767]: I0127 15:54:52.572171 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 15:54:52 crc kubenswrapper[4767]: I0127 15:54:52.572251 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 15:54:52 crc kubenswrapper[4767]: I0127 15:54:52.572286 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 15:54:52 crc kubenswrapper[4767]: I0127 15:54:52.572344 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:54:52 crc kubenswrapper[4767]: I0127 15:54:52.572415 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:54:52 crc kubenswrapper[4767]: I0127 15:54:52.572475 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:54:52 crc kubenswrapper[4767]: I0127 15:54:52.572749 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:54:52 crc kubenswrapper[4767]: I0127 15:54:52.572772 4767 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:52 crc kubenswrapper[4767]: I0127 15:54:52.572957 4767 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:52 crc kubenswrapper[4767]: I0127 15:54:52.573037 4767 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:52 crc kubenswrapper[4767]: I0127 15:54:52.623463 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:54:52 crc kubenswrapper[4767]: I0127 15:54:52.674759 4767 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:52 crc kubenswrapper[4767]: I0127 15:54:52.674795 4767 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:53 crc kubenswrapper[4767]: I0127 15:54:53.186296 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 15:54:53 crc kubenswrapper[4767]: I0127 15:54:53.186354 4767 scope.go:117] "RemoveContainer" containerID="e2f0186294f7f6eb3a8e167d43229abf5b6aa495ec1da88904b1cca4cdb832c9" Jan 27 15:54:53 crc kubenswrapper[4767]: I0127 15:54:53.186443 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 15:54:54 crc kubenswrapper[4767]: I0127 15:54:54.330819 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 27 15:54:54 crc kubenswrapper[4767]: I0127 15:54:54.331382 4767 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 27 15:54:54 crc kubenswrapper[4767]: I0127 15:54:54.343089 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 15:54:54 crc kubenswrapper[4767]: I0127 15:54:54.343136 4767 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="d80d4c0f-54f4-4261-a6a5-d6fc5f97ec9c" Jan 27 15:54:54 crc kubenswrapper[4767]: I0127 15:54:54.348913 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 15:54:54 crc kubenswrapper[4767]: I0127 15:54:54.349165 4767 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="d80d4c0f-54f4-4261-a6a5-d6fc5f97ec9c" Jan 27 15:54:58 crc kubenswrapper[4767]: I0127 15:54:58.583298 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-66b6c8bc98-th2g2"] Jan 27 15:54:58 crc kubenswrapper[4767]: I0127 15:54:58.583617 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-66b6c8bc98-th2g2" podUID="c32bc6a2-2754-4fff-8018-f9791b6a8ced" containerName="controller-manager" containerID="cri-o://f571edc4d17e9fb98acd654b22022dd3c3a0c216fe8d4df68af4775d4d0e0a41" gracePeriod=30 Jan 27 15:54:58 crc kubenswrapper[4767]: I0127 15:54:58.589106 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2"] Jan 27 15:54:58 crc kubenswrapper[4767]: I0127 15:54:58.589695 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2" podUID="9c86e5b5-fabb-4448-85d3-44fbb1addf8a" containerName="route-controller-manager" containerID="cri-o://69a828043105f2b295468b0d6c4ab84751b2d56cdfe03f84fefd9330db00970a" gracePeriod=30 Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.216515 4767 generic.go:334] "Generic (PLEG): container finished" podID="c32bc6a2-2754-4fff-8018-f9791b6a8ced" containerID="f571edc4d17e9fb98acd654b22022dd3c3a0c216fe8d4df68af4775d4d0e0a41" exitCode=0 Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.216628 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66b6c8bc98-th2g2" event={"ID":"c32bc6a2-2754-4fff-8018-f9791b6a8ced","Type":"ContainerDied","Data":"f571edc4d17e9fb98acd654b22022dd3c3a0c216fe8d4df68af4775d4d0e0a41"} Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.217984 4767 generic.go:334] "Generic (PLEG): container finished" podID="9c86e5b5-fabb-4448-85d3-44fbb1addf8a" containerID="69a828043105f2b295468b0d6c4ab84751b2d56cdfe03f84fefd9330db00970a" exitCode=0 Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.218014 4767 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2" event={"ID":"9c86e5b5-fabb-4448-85d3-44fbb1addf8a","Type":"ContainerDied","Data":"69a828043105f2b295468b0d6c4ab84751b2d56cdfe03f84fefd9330db00970a"} Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.468042 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.473199 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66b6c8bc98-th2g2" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.563937 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wnb2\" (UniqueName: \"kubernetes.io/projected/9c86e5b5-fabb-4448-85d3-44fbb1addf8a-kube-api-access-5wnb2\") pod \"9c86e5b5-fabb-4448-85d3-44fbb1addf8a\" (UID: \"9c86e5b5-fabb-4448-85d3-44fbb1addf8a\") " Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.563982 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmldc\" (UniqueName: \"kubernetes.io/projected/c32bc6a2-2754-4fff-8018-f9791b6a8ced-kube-api-access-lmldc\") pod \"c32bc6a2-2754-4fff-8018-f9791b6a8ced\" (UID: \"c32bc6a2-2754-4fff-8018-f9791b6a8ced\") " Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.564010 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c86e5b5-fabb-4448-85d3-44fbb1addf8a-client-ca\") pod \"9c86e5b5-fabb-4448-85d3-44fbb1addf8a\" (UID: \"9c86e5b5-fabb-4448-85d3-44fbb1addf8a\") " Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.564089 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c32bc6a2-2754-4fff-8018-f9791b6a8ced-proxy-ca-bundles\") pod \"c32bc6a2-2754-4fff-8018-f9791b6a8ced\" (UID: \"c32bc6a2-2754-4fff-8018-f9791b6a8ced\") " Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.564187 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c32bc6a2-2754-4fff-8018-f9791b6a8ced-client-ca\") pod \"c32bc6a2-2754-4fff-8018-f9791b6a8ced\" (UID: \"c32bc6a2-2754-4fff-8018-f9791b6a8ced\") " Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.564231 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c86e5b5-fabb-4448-85d3-44fbb1addf8a-config\") pod \"9c86e5b5-fabb-4448-85d3-44fbb1addf8a\" (UID: \"9c86e5b5-fabb-4448-85d3-44fbb1addf8a\") " Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.564253 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c32bc6a2-2754-4fff-8018-f9791b6a8ced-serving-cert\") pod \"c32bc6a2-2754-4fff-8018-f9791b6a8ced\" (UID: \"c32bc6a2-2754-4fff-8018-f9791b6a8ced\") " Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.564286 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c86e5b5-fabb-4448-85d3-44fbb1addf8a-serving-cert\") pod \"9c86e5b5-fabb-4448-85d3-44fbb1addf8a\" (UID: \"9c86e5b5-fabb-4448-85d3-44fbb1addf8a\") " Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 
15:54:59.564302 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c32bc6a2-2754-4fff-8018-f9791b6a8ced-config\") pod \"c32bc6a2-2754-4fff-8018-f9791b6a8ced\" (UID: \"c32bc6a2-2754-4fff-8018-f9791b6a8ced\") " Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.564911 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c86e5b5-fabb-4448-85d3-44fbb1addf8a-client-ca" (OuterVolumeSpecName: "client-ca") pod "9c86e5b5-fabb-4448-85d3-44fbb1addf8a" (UID: "9c86e5b5-fabb-4448-85d3-44fbb1addf8a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.564980 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c86e5b5-fabb-4448-85d3-44fbb1addf8a-config" (OuterVolumeSpecName: "config") pod "9c86e5b5-fabb-4448-85d3-44fbb1addf8a" (UID: "9c86e5b5-fabb-4448-85d3-44fbb1addf8a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.565121 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c32bc6a2-2754-4fff-8018-f9791b6a8ced-client-ca" (OuterVolumeSpecName: "client-ca") pod "c32bc6a2-2754-4fff-8018-f9791b6a8ced" (UID: "c32bc6a2-2754-4fff-8018-f9791b6a8ced"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.565149 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c32bc6a2-2754-4fff-8018-f9791b6a8ced-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c32bc6a2-2754-4fff-8018-f9791b6a8ced" (UID: "c32bc6a2-2754-4fff-8018-f9791b6a8ced"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.565243 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c32bc6a2-2754-4fff-8018-f9791b6a8ced-config" (OuterVolumeSpecName: "config") pod "c32bc6a2-2754-4fff-8018-f9791b6a8ced" (UID: "c32bc6a2-2754-4fff-8018-f9791b6a8ced"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.570652 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c32bc6a2-2754-4fff-8018-f9791b6a8ced-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c32bc6a2-2754-4fff-8018-f9791b6a8ced" (UID: "c32bc6a2-2754-4fff-8018-f9791b6a8ced"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.570680 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c32bc6a2-2754-4fff-8018-f9791b6a8ced-kube-api-access-lmldc" (OuterVolumeSpecName: "kube-api-access-lmldc") pod "c32bc6a2-2754-4fff-8018-f9791b6a8ced" (UID: "c32bc6a2-2754-4fff-8018-f9791b6a8ced"). InnerVolumeSpecName "kube-api-access-lmldc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.570654 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c86e5b5-fabb-4448-85d3-44fbb1addf8a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9c86e5b5-fabb-4448-85d3-44fbb1addf8a" (UID: "9c86e5b5-fabb-4448-85d3-44fbb1addf8a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.570666 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c86e5b5-fabb-4448-85d3-44fbb1addf8a-kube-api-access-5wnb2" (OuterVolumeSpecName: "kube-api-access-5wnb2") pod "9c86e5b5-fabb-4448-85d3-44fbb1addf8a" (UID: "9c86e5b5-fabb-4448-85d3-44fbb1addf8a"). InnerVolumeSpecName "kube-api-access-5wnb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.666145 4767 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c32bc6a2-2754-4fff-8018-f9791b6a8ced-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.666218 4767 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c32bc6a2-2754-4fff-8018-f9791b6a8ced-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.666267 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c86e5b5-fabb-4448-85d3-44fbb1addf8a-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.666281 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c32bc6a2-2754-4fff-8018-f9791b6a8ced-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.666296 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c32bc6a2-2754-4fff-8018-f9791b6a8ced-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.666308 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c86e5b5-fabb-4448-85d3-44fbb1addf8a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.666320 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wnb2\" (UniqueName: \"kubernetes.io/projected/9c86e5b5-fabb-4448-85d3-44fbb1addf8a-kube-api-access-5wnb2\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.666333 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmldc\" (UniqueName: \"kubernetes.io/projected/c32bc6a2-2754-4fff-8018-f9791b6a8ced-kube-api-access-lmldc\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.666345 4767 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c86e5b5-fabb-4448-85d3-44fbb1addf8a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.715138 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bc94876c4-75r8l"] Jan 27 15:54:59 crc kubenswrapper[4767]: 
E0127 15:54:59.715400 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.715416 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 15:54:59 crc kubenswrapper[4767]: E0127 15:54:59.715436 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c32bc6a2-2754-4fff-8018-f9791b6a8ced" containerName="controller-manager" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.715445 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="c32bc6a2-2754-4fff-8018-f9791b6a8ced" containerName="controller-manager" Jan 27 15:54:59 crc kubenswrapper[4767]: E0127 15:54:59.715464 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c86e5b5-fabb-4448-85d3-44fbb1addf8a" containerName="route-controller-manager" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.715471 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c86e5b5-fabb-4448-85d3-44fbb1addf8a" containerName="route-controller-manager" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.715582 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c86e5b5-fabb-4448-85d3-44fbb1addf8a" containerName="route-controller-manager" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.715597 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="c32bc6a2-2754-4fff-8018-f9791b6a8ced" containerName="controller-manager" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.715607 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.715984 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bc94876c4-75r8l" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.719049 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j"] Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.719617 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.743727 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j"] Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.749917 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bc94876c4-75r8l"] Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.868844 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92f930b3-7542-41d8-b8c5-1fb7e1fdb08e-serving-cert\") pod \"route-controller-manager-fc6cbc658-6jw6j\" (UID: \"92f930b3-7542-41d8-b8c5-1fb7e1fdb08e\") " pod="openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.868915 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crflk\" (UniqueName: \"kubernetes.io/projected/941ea1e9-57d3-4452-bdce-dc901ec4dac7-kube-api-access-crflk\") pod \"controller-manager-bc94876c4-75r8l\" (UID: \"941ea1e9-57d3-4452-bdce-dc901ec4dac7\") " pod="openshift-controller-manager/controller-manager-bc94876c4-75r8l" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.868935 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92f930b3-7542-41d8-b8c5-1fb7e1fdb08e-config\") pod \"route-controller-manager-fc6cbc658-6jw6j\" (UID: \"92f930b3-7542-41d8-b8c5-1fb7e1fdb08e\") " pod="openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.868958 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr2dw\" (UniqueName: \"kubernetes.io/projected/92f930b3-7542-41d8-b8c5-1fb7e1fdb08e-kube-api-access-dr2dw\") pod \"route-controller-manager-fc6cbc658-6jw6j\" (UID: \"92f930b3-7542-41d8-b8c5-1fb7e1fdb08e\") " pod="openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.869097 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/941ea1e9-57d3-4452-bdce-dc901ec4dac7-client-ca\") pod \"controller-manager-bc94876c4-75r8l\" (UID: \"941ea1e9-57d3-4452-bdce-dc901ec4dac7\") " pod="openshift-controller-manager/controller-manager-bc94876c4-75r8l" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.869166 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/92f930b3-7542-41d8-b8c5-1fb7e1fdb08e-client-ca\") pod \"route-controller-manager-fc6cbc658-6jw6j\" (UID: \"92f930b3-7542-41d8-b8c5-1fb7e1fdb08e\") " pod="openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.869230 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/941ea1e9-57d3-4452-bdce-dc901ec4dac7-proxy-ca-bundles\") pod \"controller-manager-bc94876c4-75r8l\" (UID: 
\"941ea1e9-57d3-4452-bdce-dc901ec4dac7\") " pod="openshift-controller-manager/controller-manager-bc94876c4-75r8l" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.869274 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941ea1e9-57d3-4452-bdce-dc901ec4dac7-config\") pod \"controller-manager-bc94876c4-75r8l\" (UID: \"941ea1e9-57d3-4452-bdce-dc901ec4dac7\") " pod="openshift-controller-manager/controller-manager-bc94876c4-75r8l" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.869312 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/941ea1e9-57d3-4452-bdce-dc901ec4dac7-serving-cert\") pod \"controller-manager-bc94876c4-75r8l\" (UID: \"941ea1e9-57d3-4452-bdce-dc901ec4dac7\") " pod="openshift-controller-manager/controller-manager-bc94876c4-75r8l" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.971141 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92f930b3-7542-41d8-b8c5-1fb7e1fdb08e-config\") pod \"route-controller-manager-fc6cbc658-6jw6j\" (UID: \"92f930b3-7542-41d8-b8c5-1fb7e1fdb08e\") " pod="openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.971184 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crflk\" (UniqueName: \"kubernetes.io/projected/941ea1e9-57d3-4452-bdce-dc901ec4dac7-kube-api-access-crflk\") pod \"controller-manager-bc94876c4-75r8l\" (UID: \"941ea1e9-57d3-4452-bdce-dc901ec4dac7\") " pod="openshift-controller-manager/controller-manager-bc94876c4-75r8l" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.971240 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr2dw\" (UniqueName: \"kubernetes.io/projected/92f930b3-7542-41d8-b8c5-1fb7e1fdb08e-kube-api-access-dr2dw\") pod \"route-controller-manager-fc6cbc658-6jw6j\" (UID: \"92f930b3-7542-41d8-b8c5-1fb7e1fdb08e\") " pod="openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.971320 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/941ea1e9-57d3-4452-bdce-dc901ec4dac7-client-ca\") pod \"controller-manager-bc94876c4-75r8l\" (UID: \"941ea1e9-57d3-4452-bdce-dc901ec4dac7\") " pod="openshift-controller-manager/controller-manager-bc94876c4-75r8l" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.971357 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/92f930b3-7542-41d8-b8c5-1fb7e1fdb08e-client-ca\") pod \"route-controller-manager-fc6cbc658-6jw6j\" (UID: \"92f930b3-7542-41d8-b8c5-1fb7e1fdb08e\") " pod="openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.971381 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/941ea1e9-57d3-4452-bdce-dc901ec4dac7-proxy-ca-bundles\") pod \"controller-manager-bc94876c4-75r8l\" (UID: \"941ea1e9-57d3-4452-bdce-dc901ec4dac7\") " pod="openshift-controller-manager/controller-manager-bc94876c4-75r8l" Jan 27 15:54:59 
crc kubenswrapper[4767]: I0127 15:54:59.971401 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941ea1e9-57d3-4452-bdce-dc901ec4dac7-config\") pod \"controller-manager-bc94876c4-75r8l\" (UID: \"941ea1e9-57d3-4452-bdce-dc901ec4dac7\") " pod="openshift-controller-manager/controller-manager-bc94876c4-75r8l" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.971421 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/941ea1e9-57d3-4452-bdce-dc901ec4dac7-serving-cert\") pod \"controller-manager-bc94876c4-75r8l\" (UID: \"941ea1e9-57d3-4452-bdce-dc901ec4dac7\") " pod="openshift-controller-manager/controller-manager-bc94876c4-75r8l" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.971443 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92f930b3-7542-41d8-b8c5-1fb7e1fdb08e-serving-cert\") pod \"route-controller-manager-fc6cbc658-6jw6j\" (UID: \"92f930b3-7542-41d8-b8c5-1fb7e1fdb08e\") " pod="openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.972511 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92f930b3-7542-41d8-b8c5-1fb7e1fdb08e-config\") pod \"route-controller-manager-fc6cbc658-6jw6j\" (UID: \"92f930b3-7542-41d8-b8c5-1fb7e1fdb08e\") " pod="openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.972967 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/941ea1e9-57d3-4452-bdce-dc901ec4dac7-proxy-ca-bundles\") pod \"controller-manager-bc94876c4-75r8l\" (UID: \"941ea1e9-57d3-4452-bdce-dc901ec4dac7\") " pod="openshift-controller-manager/controller-manager-bc94876c4-75r8l" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.973566 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/941ea1e9-57d3-4452-bdce-dc901ec4dac7-client-ca\") pod \"controller-manager-bc94876c4-75r8l\" (UID: \"941ea1e9-57d3-4452-bdce-dc901ec4dac7\") " pod="openshift-controller-manager/controller-manager-bc94876c4-75r8l" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.973795 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941ea1e9-57d3-4452-bdce-dc901ec4dac7-config\") pod \"controller-manager-bc94876c4-75r8l\" (UID: \"941ea1e9-57d3-4452-bdce-dc901ec4dac7\") " pod="openshift-controller-manager/controller-manager-bc94876c4-75r8l" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.973915 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/92f930b3-7542-41d8-b8c5-1fb7e1fdb08e-client-ca\") pod \"route-controller-manager-fc6cbc658-6jw6j\" (UID: \"92f930b3-7542-41d8-b8c5-1fb7e1fdb08e\") " pod="openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.977787 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/941ea1e9-57d3-4452-bdce-dc901ec4dac7-serving-cert\") pod 
\"controller-manager-bc94876c4-75r8l\" (UID: \"941ea1e9-57d3-4452-bdce-dc901ec4dac7\") " pod="openshift-controller-manager/controller-manager-bc94876c4-75r8l" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.979826 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92f930b3-7542-41d8-b8c5-1fb7e1fdb08e-serving-cert\") pod \"route-controller-manager-fc6cbc658-6jw6j\" (UID: \"92f930b3-7542-41d8-b8c5-1fb7e1fdb08e\") " pod="openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.988105 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crflk\" (UniqueName: \"kubernetes.io/projected/941ea1e9-57d3-4452-bdce-dc901ec4dac7-kube-api-access-crflk\") pod \"controller-manager-bc94876c4-75r8l\" (UID: \"941ea1e9-57d3-4452-bdce-dc901ec4dac7\") " pod="openshift-controller-manager/controller-manager-bc94876c4-75r8l" Jan 27 15:54:59 crc kubenswrapper[4767]: I0127 15:54:59.989079 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr2dw\" (UniqueName: \"kubernetes.io/projected/92f930b3-7542-41d8-b8c5-1fb7e1fdb08e-kube-api-access-dr2dw\") pod \"route-controller-manager-fc6cbc658-6jw6j\" (UID: \"92f930b3-7542-41d8-b8c5-1fb7e1fdb08e\") " pod="openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j" Jan 27 15:55:00 crc kubenswrapper[4767]: I0127 15:55:00.048996 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bc94876c4-75r8l" Jan 27 15:55:00 crc kubenswrapper[4767]: I0127 15:55:00.057277 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j" Jan 27 15:55:00 crc kubenswrapper[4767]: I0127 15:55:00.229446 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 27 15:55:00 crc kubenswrapper[4767]: I0127 15:55:00.235701 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 15:55:00 crc kubenswrapper[4767]: I0127 15:55:00.235755 4767 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="7fdfca1acaf9c7566a19a9bb69e10f77ebfdb5eb008f1cd4a0aad3732c5d1d48" exitCode=137 Jan 27 15:55:00 crc kubenswrapper[4767]: I0127 15:55:00.235834 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"7fdfca1acaf9c7566a19a9bb69e10f77ebfdb5eb008f1cd4a0aad3732c5d1d48"} Jan 27 15:55:00 crc kubenswrapper[4767]: I0127 15:55:00.235868 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"871182a0d102a33b56618810602ec3ef84ea2643a5ccf89a84b7012cca08cd11"} Jan 27 15:55:00 crc kubenswrapper[4767]: I0127 15:55:00.235887 4767 scope.go:117] "RemoveContainer" containerID="9a0401c2e0dacdd1f9206f5efb140429092a40ab36630230eb54499d5515092e" Jan 27 15:55:00 crc kubenswrapper[4767]: I0127 15:55:00.248893 4767 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66b6c8bc98-th2g2" Jan 27 15:55:00 crc kubenswrapper[4767]: I0127 15:55:00.249476 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66b6c8bc98-th2g2" event={"ID":"c32bc6a2-2754-4fff-8018-f9791b6a8ced","Type":"ContainerDied","Data":"d88068125e60dccdaa9495c251e6191ab701e8156bcbef6bcef97c0179d474da"} Jan 27 15:55:00 crc kubenswrapper[4767]: I0127 15:55:00.251801 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2" event={"ID":"9c86e5b5-fabb-4448-85d3-44fbb1addf8a","Type":"ContainerDied","Data":"7902a208df824ff001a685786c70a4b4e41b01948ad40dfd4d193ec0ce0b4c8d"} Jan 27 15:55:00 crc kubenswrapper[4767]: I0127 15:55:00.251878 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2" Jan 27 15:55:00 crc kubenswrapper[4767]: I0127 15:55:00.274650 4767 scope.go:117] "RemoveContainer" containerID="f571edc4d17e9fb98acd654b22022dd3c3a0c216fe8d4df68af4775d4d0e0a41" Jan 27 15:55:00 crc kubenswrapper[4767]: I0127 15:55:00.274783 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bc94876c4-75r8l"] Jan 27 15:55:00 crc kubenswrapper[4767]: I0127 15:55:00.293967 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2"] Jan 27 15:55:00 crc kubenswrapper[4767]: I0127 15:55:00.301816 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8bb5d5478-tkjp2"] Jan 27 15:55:00 crc kubenswrapper[4767]: I0127 15:55:00.308079 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-66b6c8bc98-th2g2"] Jan 27 15:55:00 crc kubenswrapper[4767]: I0127 15:55:00.311874 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-66b6c8bc98-th2g2"] Jan 27 15:55:00 crc kubenswrapper[4767]: I0127 15:55:00.315760 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j"] Jan 27 15:55:00 crc kubenswrapper[4767]: I0127 15:55:00.319707 4767 scope.go:117] "RemoveContainer" containerID="69a828043105f2b295468b0d6c4ab84751b2d56cdfe03f84fefd9330db00970a" Jan 27 15:55:00 crc kubenswrapper[4767]: I0127 15:55:00.334888 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c86e5b5-fabb-4448-85d3-44fbb1addf8a" path="/var/lib/kubelet/pods/9c86e5b5-fabb-4448-85d3-44fbb1addf8a/volumes" Jan 27 15:55:00 crc kubenswrapper[4767]: I0127 15:55:00.335734 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c32bc6a2-2754-4fff-8018-f9791b6a8ced" path="/var/lib/kubelet/pods/c32bc6a2-2754-4fff-8018-f9791b6a8ced/volumes" Jan 27 15:55:00 crc kubenswrapper[4767]: I0127 15:55:00.490942 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r7zcn"] Jan 27 15:55:00 crc kubenswrapper[4767]: I0127 15:55:00.491256 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r7zcn" podUID="0e3e0a9a-9b2b-4cf4-9f92-847e870be858" containerName="registry-server" 
containerID="cri-o://4ad907d3e3d0894c4c199649e95549e18caee26bff8879e3696214372d095927" gracePeriod=2 Jan 27 15:55:00 crc kubenswrapper[4767]: I0127 15:55:00.938890 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r7zcn" Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.086516 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e3e0a9a-9b2b-4cf4-9f92-847e870be858-utilities\") pod \"0e3e0a9a-9b2b-4cf4-9f92-847e870be858\" (UID: \"0e3e0a9a-9b2b-4cf4-9f92-847e870be858\") " Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.086580 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e3e0a9a-9b2b-4cf4-9f92-847e870be858-catalog-content\") pod \"0e3e0a9a-9b2b-4cf4-9f92-847e870be858\" (UID: \"0e3e0a9a-9b2b-4cf4-9f92-847e870be858\") " Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.086625 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhp7k\" (UniqueName: \"kubernetes.io/projected/0e3e0a9a-9b2b-4cf4-9f92-847e870be858-kube-api-access-lhp7k\") pod \"0e3e0a9a-9b2b-4cf4-9f92-847e870be858\" (UID: \"0e3e0a9a-9b2b-4cf4-9f92-847e870be858\") " Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.087719 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e3e0a9a-9b2b-4cf4-9f92-847e870be858-utilities" (OuterVolumeSpecName: "utilities") pod "0e3e0a9a-9b2b-4cf4-9f92-847e870be858" (UID: "0e3e0a9a-9b2b-4cf4-9f92-847e870be858"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.103427 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e3e0a9a-9b2b-4cf4-9f92-847e870be858-kube-api-access-lhp7k" (OuterVolumeSpecName: "kube-api-access-lhp7k") pod "0e3e0a9a-9b2b-4cf4-9f92-847e870be858" (UID: "0e3e0a9a-9b2b-4cf4-9f92-847e870be858"). InnerVolumeSpecName "kube-api-access-lhp7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.137319 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e3e0a9a-9b2b-4cf4-9f92-847e870be858-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e3e0a9a-9b2b-4cf4-9f92-847e870be858" (UID: "0e3e0a9a-9b2b-4cf4-9f92-847e870be858"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.187878 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhp7k\" (UniqueName: \"kubernetes.io/projected/0e3e0a9a-9b2b-4cf4-9f92-847e870be858-kube-api-access-lhp7k\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.187921 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e3e0a9a-9b2b-4cf4-9f92-847e870be858-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.187933 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e3e0a9a-9b2b-4cf4-9f92-847e870be858-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.261615 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.274901 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bc94876c4-75r8l" event={"ID":"941ea1e9-57d3-4452-bdce-dc901ec4dac7","Type":"ContainerStarted","Data":"8dbe51c156325c3719b6452211f10c2043e90363e9d68f61fa9f8b3372962e44"} Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.274943 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bc94876c4-75r8l" event={"ID":"941ea1e9-57d3-4452-bdce-dc901ec4dac7","Type":"ContainerStarted","Data":"281016183e73f69ce142c45ec5ebd6c8a38f87795c7c433a7caa8018471936d3"} Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.275378 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bc94876c4-75r8l" Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.276589 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j" event={"ID":"92f930b3-7542-41d8-b8c5-1fb7e1fdb08e","Type":"ContainerStarted","Data":"21a3684c9284d31a69994c299b823ad13ebd9b0dc1af743f5a121c6806279ffa"} Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.276627 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j" event={"ID":"92f930b3-7542-41d8-b8c5-1fb7e1fdb08e","Type":"ContainerStarted","Data":"d9afd00a033899ea9d2f60b33d31c4b45d1c41a1e17e9984f62c340cea0680f4"} Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.276778 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j" Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.279331 4767 generic.go:334] "Generic (PLEG): container finished" podID="0e3e0a9a-9b2b-4cf4-9f92-847e870be858" containerID="4ad907d3e3d0894c4c199649e95549e18caee26bff8879e3696214372d095927" exitCode=0 Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.279382 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7zcn" event={"ID":"0e3e0a9a-9b2b-4cf4-9f92-847e870be858","Type":"ContainerDied","Data":"4ad907d3e3d0894c4c199649e95549e18caee26bff8879e3696214372d095927"} Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 
15:55:01.279420 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7zcn" event={"ID":"0e3e0a9a-9b2b-4cf4-9f92-847e870be858","Type":"ContainerDied","Data":"8a2af7d588d12012fb09138392caf0db63080be1ecd4b252324dc843e553d0ff"} Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.279449 4767 scope.go:117] "RemoveContainer" containerID="4ad907d3e3d0894c4c199649e95549e18caee26bff8879e3696214372d095927" Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.279425 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r7zcn" Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.280385 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bc94876c4-75r8l" Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.282128 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j" Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.298706 4767 scope.go:117] "RemoveContainer" containerID="5391657d0de4f601a87219e013043c87121d17a7045d84052ab659d724d957ae" Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.310064 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bc94876c4-75r8l" podStartSLOduration=3.310046444 podStartE2EDuration="3.310046444s" podCreationTimestamp="2026-01-27 15:54:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:55:01.29194761 +0000 UTC m=+323.680965143" watchObservedRunningTime="2026-01-27 15:55:01.310046444 +0000 UTC m=+323.699063957" Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.317437 4767 scope.go:117] "RemoveContainer" containerID="8bc4395e018f0d60079f9755ba136cdeeb3f3b775080e2c213eae42b54632525" Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.324755 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j" podStartSLOduration=3.324734932 podStartE2EDuration="3.324734932s" podCreationTimestamp="2026-01-27 15:54:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:55:01.321907361 +0000 UTC m=+323.710924884" watchObservedRunningTime="2026-01-27 15:55:01.324734932 +0000 UTC m=+323.713752455" Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.335850 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r7zcn"] Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.337509 4767 scope.go:117] "RemoveContainer" containerID="4ad907d3e3d0894c4c199649e95549e18caee26bff8879e3696214372d095927" Jan 27 15:55:01 crc kubenswrapper[4767]: E0127 15:55:01.338016 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ad907d3e3d0894c4c199649e95549e18caee26bff8879e3696214372d095927\": container with ID starting with 4ad907d3e3d0894c4c199649e95549e18caee26bff8879e3696214372d095927 not found: ID does not exist" containerID="4ad907d3e3d0894c4c199649e95549e18caee26bff8879e3696214372d095927" Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.338053 4767 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ad907d3e3d0894c4c199649e95549e18caee26bff8879e3696214372d095927"} err="failed to get container status \"4ad907d3e3d0894c4c199649e95549e18caee26bff8879e3696214372d095927\": rpc error: code = NotFound desc = could not find container \"4ad907d3e3d0894c4c199649e95549e18caee26bff8879e3696214372d095927\": container with ID starting with 4ad907d3e3d0894c4c199649e95549e18caee26bff8879e3696214372d095927 not found: ID does not exist" Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.338089 4767 scope.go:117] "RemoveContainer" containerID="5391657d0de4f601a87219e013043c87121d17a7045d84052ab659d724d957ae" Jan 27 15:55:01 crc kubenswrapper[4767]: E0127 15:55:01.338567 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5391657d0de4f601a87219e013043c87121d17a7045d84052ab659d724d957ae\": container with ID starting with 5391657d0de4f601a87219e013043c87121d17a7045d84052ab659d724d957ae not found: ID does not exist" containerID="5391657d0de4f601a87219e013043c87121d17a7045d84052ab659d724d957ae" Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.338596 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5391657d0de4f601a87219e013043c87121d17a7045d84052ab659d724d957ae"} err="failed to get container status \"5391657d0de4f601a87219e013043c87121d17a7045d84052ab659d724d957ae\": rpc error: code = NotFound desc = could not find container \"5391657d0de4f601a87219e013043c87121d17a7045d84052ab659d724d957ae\": container with ID starting with 5391657d0de4f601a87219e013043c87121d17a7045d84052ab659d724d957ae not found: ID does not exist" Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.338616 4767 scope.go:117] "RemoveContainer" containerID="8bc4395e018f0d60079f9755ba136cdeeb3f3b775080e2c213eae42b54632525" Jan 27 15:55:01 crc kubenswrapper[4767]: E0127 15:55:01.338893 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bc4395e018f0d60079f9755ba136cdeeb3f3b775080e2c213eae42b54632525\": container with ID starting with 8bc4395e018f0d60079f9755ba136cdeeb3f3b775080e2c213eae42b54632525 not found: ID does not exist" containerID="8bc4395e018f0d60079f9755ba136cdeeb3f3b775080e2c213eae42b54632525" Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.338915 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bc4395e018f0d60079f9755ba136cdeeb3f3b775080e2c213eae42b54632525"} err="failed to get container status \"8bc4395e018f0d60079f9755ba136cdeeb3f3b775080e2c213eae42b54632525\": rpc error: code = NotFound desc = could not find container \"8bc4395e018f0d60079f9755ba136cdeeb3f3b775080e2c213eae42b54632525\": container with ID starting with 8bc4395e018f0d60079f9755ba136cdeeb3f3b775080e2c213eae42b54632525 not found: ID does not exist" Jan 27 15:55:01 crc kubenswrapper[4767]: I0127 15:55:01.339178 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r7zcn"] Jan 27 15:55:02 crc kubenswrapper[4767]: I0127 15:55:02.294407 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7pmbd"] Jan 27 15:55:02 crc kubenswrapper[4767]: I0127 15:55:02.295133 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7pmbd" podUID="43f8f2c5-51fc-4707-903f-fef9c5f133c5" 
containerName="registry-server" containerID="cri-o://bc0007bbc7a28a938a90d8cf529c3fe7ae33e83674e397c61aba655b747e2245" gracePeriod=2 Jan 27 15:55:02 crc kubenswrapper[4767]: I0127 15:55:02.333188 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e3e0a9a-9b2b-4cf4-9f92-847e870be858" path="/var/lib/kubelet/pods/0e3e0a9a-9b2b-4cf4-9f92-847e870be858/volumes" Jan 27 15:55:02 crc kubenswrapper[4767]: I0127 15:55:02.666996 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7pmbd" Jan 27 15:55:02 crc kubenswrapper[4767]: I0127 15:55:02.820150 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43f8f2c5-51fc-4707-903f-fef9c5f133c5-utilities\") pod \"43f8f2c5-51fc-4707-903f-fef9c5f133c5\" (UID: \"43f8f2c5-51fc-4707-903f-fef9c5f133c5\") " Jan 27 15:55:02 crc kubenswrapper[4767]: I0127 15:55:02.820280 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43f8f2c5-51fc-4707-903f-fef9c5f133c5-catalog-content\") pod \"43f8f2c5-51fc-4707-903f-fef9c5f133c5\" (UID: \"43f8f2c5-51fc-4707-903f-fef9c5f133c5\") " Jan 27 15:55:02 crc kubenswrapper[4767]: I0127 15:55:02.820315 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqznc\" (UniqueName: \"kubernetes.io/projected/43f8f2c5-51fc-4707-903f-fef9c5f133c5-kube-api-access-mqznc\") pod \"43f8f2c5-51fc-4707-903f-fef9c5f133c5\" (UID: \"43f8f2c5-51fc-4707-903f-fef9c5f133c5\") " Jan 27 15:55:02 crc kubenswrapper[4767]: I0127 15:55:02.821109 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43f8f2c5-51fc-4707-903f-fef9c5f133c5-utilities" (OuterVolumeSpecName: "utilities") pod "43f8f2c5-51fc-4707-903f-fef9c5f133c5" (UID: "43f8f2c5-51fc-4707-903f-fef9c5f133c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:55:02 crc kubenswrapper[4767]: I0127 15:55:02.825984 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43f8f2c5-51fc-4707-903f-fef9c5f133c5-kube-api-access-mqznc" (OuterVolumeSpecName: "kube-api-access-mqznc") pod "43f8f2c5-51fc-4707-903f-fef9c5f133c5" (UID: "43f8f2c5-51fc-4707-903f-fef9c5f133c5"). InnerVolumeSpecName "kube-api-access-mqznc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:55:02 crc kubenswrapper[4767]: I0127 15:55:02.848470 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43f8f2c5-51fc-4707-903f-fef9c5f133c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43f8f2c5-51fc-4707-903f-fef9c5f133c5" (UID: "43f8f2c5-51fc-4707-903f-fef9c5f133c5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:55:02 crc kubenswrapper[4767]: I0127 15:55:02.891868 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7nshp"] Jan 27 15:55:02 crc kubenswrapper[4767]: I0127 15:55:02.892155 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7nshp" podUID="84510a56-8f29-404f-b5eb-c7433db1de6b" containerName="registry-server" containerID="cri-o://190eb07d0f90303deb2aff0c966d689c5691643c64e91063107588037f02002b" gracePeriod=2 Jan 27 15:55:02 crc kubenswrapper[4767]: I0127 15:55:02.922179 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43f8f2c5-51fc-4707-903f-fef9c5f133c5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:02 crc kubenswrapper[4767]: I0127 15:55:02.922646 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqznc\" (UniqueName: \"kubernetes.io/projected/43f8f2c5-51fc-4707-903f-fef9c5f133c5-kube-api-access-mqznc\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:02 crc kubenswrapper[4767]: I0127 15:55:02.922661 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43f8f2c5-51fc-4707-903f-fef9c5f133c5-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.289027 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7nshp" Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.296089 4767 generic.go:334] "Generic (PLEG): container finished" podID="84510a56-8f29-404f-b5eb-c7433db1de6b" containerID="190eb07d0f90303deb2aff0c966d689c5691643c64e91063107588037f02002b" exitCode=0 Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.296228 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nshp" event={"ID":"84510a56-8f29-404f-b5eb-c7433db1de6b","Type":"ContainerDied","Data":"190eb07d0f90303deb2aff0c966d689c5691643c64e91063107588037f02002b"} Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.296297 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7nshp" event={"ID":"84510a56-8f29-404f-b5eb-c7433db1de6b","Type":"ContainerDied","Data":"704c876a572f6a84bda36cb1dd8099990bbbe8793f99d67771c1d18033ed6126"} Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.296321 4767 scope.go:117] "RemoveContainer" containerID="190eb07d0f90303deb2aff0c966d689c5691643c64e91063107588037f02002b" Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.296186 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7nshp" Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.308624 4767 generic.go:334] "Generic (PLEG): container finished" podID="43f8f2c5-51fc-4707-903f-fef9c5f133c5" containerID="bc0007bbc7a28a938a90d8cf529c3fe7ae33e83674e397c61aba655b747e2245" exitCode=0 Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.308684 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7pmbd" Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.308713 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7pmbd" event={"ID":"43f8f2c5-51fc-4707-903f-fef9c5f133c5","Type":"ContainerDied","Data":"bc0007bbc7a28a938a90d8cf529c3fe7ae33e83674e397c61aba655b747e2245"} Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.308743 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7pmbd" event={"ID":"43f8f2c5-51fc-4707-903f-fef9c5f133c5","Type":"ContainerDied","Data":"09b45ca304ced8abef2827abfa263e2db0e60eb45232396068773db79ac118dd"} Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.320815 4767 scope.go:117] "RemoveContainer" containerID="865b8536ec8ecad7ab4cf2e26539c67aa8b53356a9b821c88d34700b32fdc8a1" Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.345536 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7pmbd"] Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.352995 4767 scope.go:117] "RemoveContainer" containerID="daf80ab02fc430eaacbce8691909fef3645d103333d20e15997a8b3d820eab10" Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.353406 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7pmbd"] Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.384278 4767 scope.go:117] "RemoveContainer" containerID="190eb07d0f90303deb2aff0c966d689c5691643c64e91063107588037f02002b" Jan 27 15:55:03 crc kubenswrapper[4767]: E0127 15:55:03.385467 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"190eb07d0f90303deb2aff0c966d689c5691643c64e91063107588037f02002b\": container with ID starting with 190eb07d0f90303deb2aff0c966d689c5691643c64e91063107588037f02002b not found: ID does not exist" containerID="190eb07d0f90303deb2aff0c966d689c5691643c64e91063107588037f02002b" Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.385516 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"190eb07d0f90303deb2aff0c966d689c5691643c64e91063107588037f02002b"} err="failed to get container status \"190eb07d0f90303deb2aff0c966d689c5691643c64e91063107588037f02002b\": rpc error: code = NotFound desc = could not find container \"190eb07d0f90303deb2aff0c966d689c5691643c64e91063107588037f02002b\": container with ID starting with 190eb07d0f90303deb2aff0c966d689c5691643c64e91063107588037f02002b not found: ID does not exist" Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.385544 4767 scope.go:117] "RemoveContainer" containerID="865b8536ec8ecad7ab4cf2e26539c67aa8b53356a9b821c88d34700b32fdc8a1" Jan 27 15:55:03 crc kubenswrapper[4767]: E0127 15:55:03.385853 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"865b8536ec8ecad7ab4cf2e26539c67aa8b53356a9b821c88d34700b32fdc8a1\": container with ID starting with 865b8536ec8ecad7ab4cf2e26539c67aa8b53356a9b821c88d34700b32fdc8a1 not found: ID does not exist" containerID="865b8536ec8ecad7ab4cf2e26539c67aa8b53356a9b821c88d34700b32fdc8a1" Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.385995 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"865b8536ec8ecad7ab4cf2e26539c67aa8b53356a9b821c88d34700b32fdc8a1"} err="failed to get 
container status \"865b8536ec8ecad7ab4cf2e26539c67aa8b53356a9b821c88d34700b32fdc8a1\": rpc error: code = NotFound desc = could not find container \"865b8536ec8ecad7ab4cf2e26539c67aa8b53356a9b821c88d34700b32fdc8a1\": container with ID starting with 865b8536ec8ecad7ab4cf2e26539c67aa8b53356a9b821c88d34700b32fdc8a1 not found: ID does not exist" Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.386138 4767 scope.go:117] "RemoveContainer" containerID="daf80ab02fc430eaacbce8691909fef3645d103333d20e15997a8b3d820eab10" Jan 27 15:55:03 crc kubenswrapper[4767]: E0127 15:55:03.386799 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daf80ab02fc430eaacbce8691909fef3645d103333d20e15997a8b3d820eab10\": container with ID starting with daf80ab02fc430eaacbce8691909fef3645d103333d20e15997a8b3d820eab10 not found: ID does not exist" containerID="daf80ab02fc430eaacbce8691909fef3645d103333d20e15997a8b3d820eab10" Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.386826 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daf80ab02fc430eaacbce8691909fef3645d103333d20e15997a8b3d820eab10"} err="failed to get container status \"daf80ab02fc430eaacbce8691909fef3645d103333d20e15997a8b3d820eab10\": rpc error: code = NotFound desc = could not find container \"daf80ab02fc430eaacbce8691909fef3645d103333d20e15997a8b3d820eab10\": container with ID starting with daf80ab02fc430eaacbce8691909fef3645d103333d20e15997a8b3d820eab10 not found: ID does not exist" Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.386842 4767 scope.go:117] "RemoveContainer" containerID="bc0007bbc7a28a938a90d8cf529c3fe7ae33e83674e397c61aba655b747e2245" Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.404230 4767 scope.go:117] "RemoveContainer" containerID="7e54640c00bad73cb848619e349470f79cadd1083b48e25bd0634c11126e4d50" Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.419672 4767 scope.go:117] "RemoveContainer" containerID="362c9fc5434f75c0783042b9eda566b1a903d2bdc4234843a5b831b6596773cb" Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.428728 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2nbg\" (UniqueName: \"kubernetes.io/projected/84510a56-8f29-404f-b5eb-c7433db1de6b-kube-api-access-b2nbg\") pod \"84510a56-8f29-404f-b5eb-c7433db1de6b\" (UID: \"84510a56-8f29-404f-b5eb-c7433db1de6b\") " Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.429103 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84510a56-8f29-404f-b5eb-c7433db1de6b-catalog-content\") pod \"84510a56-8f29-404f-b5eb-c7433db1de6b\" (UID: \"84510a56-8f29-404f-b5eb-c7433db1de6b\") " Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.429362 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84510a56-8f29-404f-b5eb-c7433db1de6b-utilities\") pod \"84510a56-8f29-404f-b5eb-c7433db1de6b\" (UID: \"84510a56-8f29-404f-b5eb-c7433db1de6b\") " Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.430396 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84510a56-8f29-404f-b5eb-c7433db1de6b-utilities" (OuterVolumeSpecName: "utilities") pod "84510a56-8f29-404f-b5eb-c7433db1de6b" (UID: "84510a56-8f29-404f-b5eb-c7433db1de6b"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.433070 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84510a56-8f29-404f-b5eb-c7433db1de6b-kube-api-access-b2nbg" (OuterVolumeSpecName: "kube-api-access-b2nbg") pod "84510a56-8f29-404f-b5eb-c7433db1de6b" (UID: "84510a56-8f29-404f-b5eb-c7433db1de6b"). InnerVolumeSpecName "kube-api-access-b2nbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.435441 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2nbg\" (UniqueName: \"kubernetes.io/projected/84510a56-8f29-404f-b5eb-c7433db1de6b-kube-api-access-b2nbg\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.435480 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84510a56-8f29-404f-b5eb-c7433db1de6b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.435896 4767 scope.go:117] "RemoveContainer" containerID="bc0007bbc7a28a938a90d8cf529c3fe7ae33e83674e397c61aba655b747e2245" Jan 27 15:55:03 crc kubenswrapper[4767]: E0127 15:55:03.436239 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc0007bbc7a28a938a90d8cf529c3fe7ae33e83674e397c61aba655b747e2245\": container with ID starting with bc0007bbc7a28a938a90d8cf529c3fe7ae33e83674e397c61aba655b747e2245 not found: ID does not exist" containerID="bc0007bbc7a28a938a90d8cf529c3fe7ae33e83674e397c61aba655b747e2245" Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.436291 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc0007bbc7a28a938a90d8cf529c3fe7ae33e83674e397c61aba655b747e2245"} err="failed to get container status \"bc0007bbc7a28a938a90d8cf529c3fe7ae33e83674e397c61aba655b747e2245\": rpc error: code = NotFound desc = could not find container \"bc0007bbc7a28a938a90d8cf529c3fe7ae33e83674e397c61aba655b747e2245\": container with ID starting with bc0007bbc7a28a938a90d8cf529c3fe7ae33e83674e397c61aba655b747e2245 not found: ID does not exist" Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.436320 4767 scope.go:117] "RemoveContainer" containerID="7e54640c00bad73cb848619e349470f79cadd1083b48e25bd0634c11126e4d50" Jan 27 15:55:03 crc kubenswrapper[4767]: E0127 15:55:03.437001 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e54640c00bad73cb848619e349470f79cadd1083b48e25bd0634c11126e4d50\": container with ID starting with 7e54640c00bad73cb848619e349470f79cadd1083b48e25bd0634c11126e4d50 not found: ID does not exist" containerID="7e54640c00bad73cb848619e349470f79cadd1083b48e25bd0634c11126e4d50" Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.437148 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e54640c00bad73cb848619e349470f79cadd1083b48e25bd0634c11126e4d50"} err="failed to get container status \"7e54640c00bad73cb848619e349470f79cadd1083b48e25bd0634c11126e4d50\": rpc error: code = NotFound desc = could not find container \"7e54640c00bad73cb848619e349470f79cadd1083b48e25bd0634c11126e4d50\": container with ID starting with 7e54640c00bad73cb848619e349470f79cadd1083b48e25bd0634c11126e4d50 not found: ID does 
not exist" Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.437291 4767 scope.go:117] "RemoveContainer" containerID="362c9fc5434f75c0783042b9eda566b1a903d2bdc4234843a5b831b6596773cb" Jan 27 15:55:03 crc kubenswrapper[4767]: E0127 15:55:03.437639 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"362c9fc5434f75c0783042b9eda566b1a903d2bdc4234843a5b831b6596773cb\": container with ID starting with 362c9fc5434f75c0783042b9eda566b1a903d2bdc4234843a5b831b6596773cb not found: ID does not exist" containerID="362c9fc5434f75c0783042b9eda566b1a903d2bdc4234843a5b831b6596773cb" Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.437669 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"362c9fc5434f75c0783042b9eda566b1a903d2bdc4234843a5b831b6596773cb"} err="failed to get container status \"362c9fc5434f75c0783042b9eda566b1a903d2bdc4234843a5b831b6596773cb\": rpc error: code = NotFound desc = could not find container \"362c9fc5434f75c0783042b9eda566b1a903d2bdc4234843a5b831b6596773cb\": container with ID starting with 362c9fc5434f75c0783042b9eda566b1a903d2bdc4234843a5b831b6596773cb not found: ID does not exist" Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.563011 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84510a56-8f29-404f-b5eb-c7433db1de6b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84510a56-8f29-404f-b5eb-c7433db1de6b" (UID: "84510a56-8f29-404f-b5eb-c7433db1de6b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.638040 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84510a56-8f29-404f-b5eb-c7433db1de6b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.640158 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7nshp"] Jan 27 15:55:03 crc kubenswrapper[4767]: I0127 15:55:03.647991 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7nshp"] Jan 27 15:55:04 crc kubenswrapper[4767]: I0127 15:55:04.337300 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43f8f2c5-51fc-4707-903f-fef9c5f133c5" path="/var/lib/kubelet/pods/43f8f2c5-51fc-4707-903f-fef9c5f133c5/volumes" Jan 27 15:55:04 crc kubenswrapper[4767]: I0127 15:55:04.339382 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84510a56-8f29-404f-b5eb-c7433db1de6b" path="/var/lib/kubelet/pods/84510a56-8f29-404f-b5eb-c7433db1de6b/volumes" Jan 27 15:55:08 crc kubenswrapper[4767]: I0127 15:55:08.426806 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:55:09 crc kubenswrapper[4767]: I0127 15:55:09.244989 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:55:09 crc kubenswrapper[4767]: I0127 15:55:09.249387 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:55:10 crc kubenswrapper[4767]: I0127 15:55:10.350891 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.243000 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j"] Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.243742 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j" podUID="92f930b3-7542-41d8-b8c5-1fb7e1fdb08e" containerName="route-controller-manager" containerID="cri-o://21a3684c9284d31a69994c299b823ad13ebd9b0dc1af743f5a121c6806279ffa" gracePeriod=30 Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.262825 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bc94876c4-75r8l"] Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.263080 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-bc94876c4-75r8l" podUID="941ea1e9-57d3-4452-bdce-dc901ec4dac7" containerName="controller-manager" containerID="cri-o://8dbe51c156325c3719b6452211f10c2043e90363e9d68f61fa9f8b3372962e44" gracePeriod=30 Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.421699 4767 generic.go:334] "Generic (PLEG): container finished" podID="941ea1e9-57d3-4452-bdce-dc901ec4dac7" containerID="8dbe51c156325c3719b6452211f10c2043e90363e9d68f61fa9f8b3372962e44" exitCode=0 Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.421767 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bc94876c4-75r8l" event={"ID":"941ea1e9-57d3-4452-bdce-dc901ec4dac7","Type":"ContainerDied","Data":"8dbe51c156325c3719b6452211f10c2043e90363e9d68f61fa9f8b3372962e44"} Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.428627 4767 generic.go:334] "Generic (PLEG): container finished" podID="92f930b3-7542-41d8-b8c5-1fb7e1fdb08e" containerID="21a3684c9284d31a69994c299b823ad13ebd9b0dc1af743f5a121c6806279ffa" exitCode=0 Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.428685 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j" event={"ID":"92f930b3-7542-41d8-b8c5-1fb7e1fdb08e","Type":"ContainerDied","Data":"21a3684c9284d31a69994c299b823ad13ebd9b0dc1af743f5a121c6806279ffa"} Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.794658 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j" Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.890861 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92f930b3-7542-41d8-b8c5-1fb7e1fdb08e-config\") pod \"92f930b3-7542-41d8-b8c5-1fb7e1fdb08e\" (UID: \"92f930b3-7542-41d8-b8c5-1fb7e1fdb08e\") " Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.890946 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92f930b3-7542-41d8-b8c5-1fb7e1fdb08e-serving-cert\") pod \"92f930b3-7542-41d8-b8c5-1fb7e1fdb08e\" (UID: \"92f930b3-7542-41d8-b8c5-1fb7e1fdb08e\") " Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.890977 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/92f930b3-7542-41d8-b8c5-1fb7e1fdb08e-client-ca\") pod \"92f930b3-7542-41d8-b8c5-1fb7e1fdb08e\" (UID: \"92f930b3-7542-41d8-b8c5-1fb7e1fdb08e\") " Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.891024 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr2dw\" (UniqueName: \"kubernetes.io/projected/92f930b3-7542-41d8-b8c5-1fb7e1fdb08e-kube-api-access-dr2dw\") pod \"92f930b3-7542-41d8-b8c5-1fb7e1fdb08e\" (UID: \"92f930b3-7542-41d8-b8c5-1fb7e1fdb08e\") " Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.891860 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92f930b3-7542-41d8-b8c5-1fb7e1fdb08e-client-ca" (OuterVolumeSpecName: "client-ca") pod "92f930b3-7542-41d8-b8c5-1fb7e1fdb08e" (UID: "92f930b3-7542-41d8-b8c5-1fb7e1fdb08e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.892191 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92f930b3-7542-41d8-b8c5-1fb7e1fdb08e-config" (OuterVolumeSpecName: "config") pod "92f930b3-7542-41d8-b8c5-1fb7e1fdb08e" (UID: "92f930b3-7542-41d8-b8c5-1fb7e1fdb08e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.898890 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92f930b3-7542-41d8-b8c5-1fb7e1fdb08e-kube-api-access-dr2dw" (OuterVolumeSpecName: "kube-api-access-dr2dw") pod "92f930b3-7542-41d8-b8c5-1fb7e1fdb08e" (UID: "92f930b3-7542-41d8-b8c5-1fb7e1fdb08e"). InnerVolumeSpecName "kube-api-access-dr2dw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.899533 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92f930b3-7542-41d8-b8c5-1fb7e1fdb08e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "92f930b3-7542-41d8-b8c5-1fb7e1fdb08e" (UID: "92f930b3-7542-41d8-b8c5-1fb7e1fdb08e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.922923 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-bc94876c4-75r8l" Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.991866 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/941ea1e9-57d3-4452-bdce-dc901ec4dac7-proxy-ca-bundles\") pod \"941ea1e9-57d3-4452-bdce-dc901ec4dac7\" (UID: \"941ea1e9-57d3-4452-bdce-dc901ec4dac7\") " Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.991952 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/941ea1e9-57d3-4452-bdce-dc901ec4dac7-serving-cert\") pod \"941ea1e9-57d3-4452-bdce-dc901ec4dac7\" (UID: \"941ea1e9-57d3-4452-bdce-dc901ec4dac7\") " Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.991998 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/941ea1e9-57d3-4452-bdce-dc901ec4dac7-client-ca\") pod \"941ea1e9-57d3-4452-bdce-dc901ec4dac7\" (UID: \"941ea1e9-57d3-4452-bdce-dc901ec4dac7\") " Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.992070 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crflk\" (UniqueName: \"kubernetes.io/projected/941ea1e9-57d3-4452-bdce-dc901ec4dac7-kube-api-access-crflk\") pod \"941ea1e9-57d3-4452-bdce-dc901ec4dac7\" (UID: \"941ea1e9-57d3-4452-bdce-dc901ec4dac7\") " Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.992109 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941ea1e9-57d3-4452-bdce-dc901ec4dac7-config\") pod \"941ea1e9-57d3-4452-bdce-dc901ec4dac7\" (UID: \"941ea1e9-57d3-4452-bdce-dc901ec4dac7\") " Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.992364 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dr2dw\" (UniqueName: \"kubernetes.io/projected/92f930b3-7542-41d8-b8c5-1fb7e1fdb08e-kube-api-access-dr2dw\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.992387 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92f930b3-7542-41d8-b8c5-1fb7e1fdb08e-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.992399 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92f930b3-7542-41d8-b8c5-1fb7e1fdb08e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.992410 4767 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/92f930b3-7542-41d8-b8c5-1fb7e1fdb08e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.993760 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941ea1e9-57d3-4452-bdce-dc901ec4dac7-client-ca" (OuterVolumeSpecName: "client-ca") pod "941ea1e9-57d3-4452-bdce-dc901ec4dac7" (UID: "941ea1e9-57d3-4452-bdce-dc901ec4dac7"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.993840 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941ea1e9-57d3-4452-bdce-dc901ec4dac7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "941ea1e9-57d3-4452-bdce-dc901ec4dac7" (UID: "941ea1e9-57d3-4452-bdce-dc901ec4dac7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.993859 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941ea1e9-57d3-4452-bdce-dc901ec4dac7-config" (OuterVolumeSpecName: "config") pod "941ea1e9-57d3-4452-bdce-dc901ec4dac7" (UID: "941ea1e9-57d3-4452-bdce-dc901ec4dac7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.995603 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/941ea1e9-57d3-4452-bdce-dc901ec4dac7-kube-api-access-crflk" (OuterVolumeSpecName: "kube-api-access-crflk") pod "941ea1e9-57d3-4452-bdce-dc901ec4dac7" (UID: "941ea1e9-57d3-4452-bdce-dc901ec4dac7"). InnerVolumeSpecName "kube-api-access-crflk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:55:22 crc kubenswrapper[4767]: I0127 15:55:22.996501 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/941ea1e9-57d3-4452-bdce-dc901ec4dac7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "941ea1e9-57d3-4452-bdce-dc901ec4dac7" (UID: "941ea1e9-57d3-4452-bdce-dc901ec4dac7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.093785 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crflk\" (UniqueName: \"kubernetes.io/projected/941ea1e9-57d3-4452-bdce-dc901ec4dac7-kube-api-access-crflk\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.093835 4767 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941ea1e9-57d3-4452-bdce-dc901ec4dac7-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.093851 4767 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/941ea1e9-57d3-4452-bdce-dc901ec4dac7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.093864 4767 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/941ea1e9-57d3-4452-bdce-dc901ec4dac7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.093876 4767 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/941ea1e9-57d3-4452-bdce-dc901ec4dac7-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.435511 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-bc94876c4-75r8l" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.435494 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bc94876c4-75r8l" event={"ID":"941ea1e9-57d3-4452-bdce-dc901ec4dac7","Type":"ContainerDied","Data":"281016183e73f69ce142c45ec5ebd6c8a38f87795c7c433a7caa8018471936d3"} Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.435662 4767 scope.go:117] "RemoveContainer" containerID="8dbe51c156325c3719b6452211f10c2043e90363e9d68f61fa9f8b3372962e44" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.437928 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j" event={"ID":"92f930b3-7542-41d8-b8c5-1fb7e1fdb08e","Type":"ContainerDied","Data":"d9afd00a033899ea9d2f60b33d31c4b45d1c41a1e17e9984f62c340cea0680f4"} Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.437970 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.459270 4767 scope.go:117] "RemoveContainer" containerID="21a3684c9284d31a69994c299b823ad13ebd9b0dc1af743f5a121c6806279ffa" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.467110 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j"] Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.472729 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fc6cbc658-6jw6j"] Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.485536 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bc94876c4-75r8l"] Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.489811 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-bc94876c4-75r8l"] Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.738395 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5ccfb88f5c-j69xk"] Jan 27 15:55:23 crc kubenswrapper[4767]: E0127 15:55:23.738951 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92f930b3-7542-41d8-b8c5-1fb7e1fdb08e" containerName="route-controller-manager" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.738964 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="92f930b3-7542-41d8-b8c5-1fb7e1fdb08e" containerName="route-controller-manager" Jan 27 15:55:23 crc kubenswrapper[4767]: E0127 15:55:23.738977 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="941ea1e9-57d3-4452-bdce-dc901ec4dac7" containerName="controller-manager" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.738983 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="941ea1e9-57d3-4452-bdce-dc901ec4dac7" containerName="controller-manager" Jan 27 15:55:23 crc kubenswrapper[4767]: E0127 15:55:23.738994 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84510a56-8f29-404f-b5eb-c7433db1de6b" containerName="extract-content" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.739000 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="84510a56-8f29-404f-b5eb-c7433db1de6b" containerName="extract-content" Jan 27 
15:55:23 crc kubenswrapper[4767]: E0127 15:55:23.739008 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3e0a9a-9b2b-4cf4-9f92-847e870be858" containerName="extract-utilities" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.739017 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3e0a9a-9b2b-4cf4-9f92-847e870be858" containerName="extract-utilities" Jan 27 15:55:23 crc kubenswrapper[4767]: E0127 15:55:23.739026 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43f8f2c5-51fc-4707-903f-fef9c5f133c5" containerName="extract-content" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.739033 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="43f8f2c5-51fc-4707-903f-fef9c5f133c5" containerName="extract-content" Jan 27 15:55:23 crc kubenswrapper[4767]: E0127 15:55:23.739048 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84510a56-8f29-404f-b5eb-c7433db1de6b" containerName="registry-server" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.739055 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="84510a56-8f29-404f-b5eb-c7433db1de6b" containerName="registry-server" Jan 27 15:55:23 crc kubenswrapper[4767]: E0127 15:55:23.739067 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3e0a9a-9b2b-4cf4-9f92-847e870be858" containerName="extract-content" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.739074 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3e0a9a-9b2b-4cf4-9f92-847e870be858" containerName="extract-content" Jan 27 15:55:23 crc kubenswrapper[4767]: E0127 15:55:23.739086 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43f8f2c5-51fc-4707-903f-fef9c5f133c5" containerName="registry-server" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.739094 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="43f8f2c5-51fc-4707-903f-fef9c5f133c5" containerName="registry-server" Jan 27 15:55:23 crc kubenswrapper[4767]: E0127 15:55:23.739104 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84510a56-8f29-404f-b5eb-c7433db1de6b" containerName="extract-utilities" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.739111 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="84510a56-8f29-404f-b5eb-c7433db1de6b" containerName="extract-utilities" Jan 27 15:55:23 crc kubenswrapper[4767]: E0127 15:55:23.739120 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3e0a9a-9b2b-4cf4-9f92-847e870be858" containerName="registry-server" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.739127 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3e0a9a-9b2b-4cf4-9f92-847e870be858" containerName="registry-server" Jan 27 15:55:23 crc kubenswrapper[4767]: E0127 15:55:23.739137 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43f8f2c5-51fc-4707-903f-fef9c5f133c5" containerName="extract-utilities" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.739143 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="43f8f2c5-51fc-4707-903f-fef9c5f133c5" containerName="extract-utilities" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.739240 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="941ea1e9-57d3-4452-bdce-dc901ec4dac7" containerName="controller-manager" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.739250 4767 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="92f930b3-7542-41d8-b8c5-1fb7e1fdb08e" containerName="route-controller-manager" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.739257 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="84510a56-8f29-404f-b5eb-c7433db1de6b" containerName="registry-server" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.739265 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="43f8f2c5-51fc-4707-903f-fef9c5f133c5" containerName="registry-server" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.739275 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e3e0a9a-9b2b-4cf4-9f92-847e870be858" containerName="registry-server" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.739698 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5ccfb88f5c-j69xk" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.743028 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.744601 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.744601 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.744726 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.745175 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.745352 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.746920 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54d5585bdc-fxjhj"] Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.747755 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54d5585bdc-fxjhj" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.749723 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.749950 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.750105 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.750166 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.752784 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.753432 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.758029 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5ccfb88f5c-j69xk"] Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.761706 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54d5585bdc-fxjhj"] Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.761850 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.806324 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/baed76e9-8788-4fe7-bb86-295c669448e5-serving-cert\") pod \"controller-manager-5ccfb88f5c-j69xk\" (UID: \"baed76e9-8788-4fe7-bb86-295c669448e5\") " pod="openshift-controller-manager/controller-manager-5ccfb88f5c-j69xk" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.806841 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5dp5\" (UniqueName: \"kubernetes.io/projected/baed76e9-8788-4fe7-bb86-295c669448e5-kube-api-access-z5dp5\") pod \"controller-manager-5ccfb88f5c-j69xk\" (UID: \"baed76e9-8788-4fe7-bb86-295c669448e5\") " pod="openshift-controller-manager/controller-manager-5ccfb88f5c-j69xk" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.807019 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baed76e9-8788-4fe7-bb86-295c669448e5-config\") pod \"controller-manager-5ccfb88f5c-j69xk\" (UID: \"baed76e9-8788-4fe7-bb86-295c669448e5\") " pod="openshift-controller-manager/controller-manager-5ccfb88f5c-j69xk" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.807085 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/baed76e9-8788-4fe7-bb86-295c669448e5-proxy-ca-bundles\") pod \"controller-manager-5ccfb88f5c-j69xk\" (UID: \"baed76e9-8788-4fe7-bb86-295c669448e5\") " 
pod="openshift-controller-manager/controller-manager-5ccfb88f5c-j69xk" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.807156 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/baed76e9-8788-4fe7-bb86-295c669448e5-client-ca\") pod \"controller-manager-5ccfb88f5c-j69xk\" (UID: \"baed76e9-8788-4fe7-bb86-295c669448e5\") " pod="openshift-controller-manager/controller-manager-5ccfb88f5c-j69xk" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.909118 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5dp5\" (UniqueName: \"kubernetes.io/projected/baed76e9-8788-4fe7-bb86-295c669448e5-kube-api-access-z5dp5\") pod \"controller-manager-5ccfb88f5c-j69xk\" (UID: \"baed76e9-8788-4fe7-bb86-295c669448e5\") " pod="openshift-controller-manager/controller-manager-5ccfb88f5c-j69xk" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.909245 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baed76e9-8788-4fe7-bb86-295c669448e5-config\") pod \"controller-manager-5ccfb88f5c-j69xk\" (UID: \"baed76e9-8788-4fe7-bb86-295c669448e5\") " pod="openshift-controller-manager/controller-manager-5ccfb88f5c-j69xk" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.909273 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/baed76e9-8788-4fe7-bb86-295c669448e5-proxy-ca-bundles\") pod \"controller-manager-5ccfb88f5c-j69xk\" (UID: \"baed76e9-8788-4fe7-bb86-295c669448e5\") " pod="openshift-controller-manager/controller-manager-5ccfb88f5c-j69xk" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.909329 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/769e7995-330b-4f27-84d0-7d3a63207686-config\") pod \"route-controller-manager-54d5585bdc-fxjhj\" (UID: \"769e7995-330b-4f27-84d0-7d3a63207686\") " pod="openshift-route-controller-manager/route-controller-manager-54d5585bdc-fxjhj" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.909356 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc526\" (UniqueName: \"kubernetes.io/projected/769e7995-330b-4f27-84d0-7d3a63207686-kube-api-access-hc526\") pod \"route-controller-manager-54d5585bdc-fxjhj\" (UID: \"769e7995-330b-4f27-84d0-7d3a63207686\") " pod="openshift-route-controller-manager/route-controller-manager-54d5585bdc-fxjhj" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.909402 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/baed76e9-8788-4fe7-bb86-295c669448e5-client-ca\") pod \"controller-manager-5ccfb88f5c-j69xk\" (UID: \"baed76e9-8788-4fe7-bb86-295c669448e5\") " pod="openshift-controller-manager/controller-manager-5ccfb88f5c-j69xk" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.909429 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/769e7995-330b-4f27-84d0-7d3a63207686-client-ca\") pod \"route-controller-manager-54d5585bdc-fxjhj\" (UID: \"769e7995-330b-4f27-84d0-7d3a63207686\") " pod="openshift-route-controller-manager/route-controller-manager-54d5585bdc-fxjhj" Jan 27 
15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.909487 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/769e7995-330b-4f27-84d0-7d3a63207686-serving-cert\") pod \"route-controller-manager-54d5585bdc-fxjhj\" (UID: \"769e7995-330b-4f27-84d0-7d3a63207686\") " pod="openshift-route-controller-manager/route-controller-manager-54d5585bdc-fxjhj" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.909516 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/baed76e9-8788-4fe7-bb86-295c669448e5-serving-cert\") pod \"controller-manager-5ccfb88f5c-j69xk\" (UID: \"baed76e9-8788-4fe7-bb86-295c669448e5\") " pod="openshift-controller-manager/controller-manager-5ccfb88f5c-j69xk" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.910428 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/baed76e9-8788-4fe7-bb86-295c669448e5-client-ca\") pod \"controller-manager-5ccfb88f5c-j69xk\" (UID: \"baed76e9-8788-4fe7-bb86-295c669448e5\") " pod="openshift-controller-manager/controller-manager-5ccfb88f5c-j69xk" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.910556 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baed76e9-8788-4fe7-bb86-295c669448e5-config\") pod \"controller-manager-5ccfb88f5c-j69xk\" (UID: \"baed76e9-8788-4fe7-bb86-295c669448e5\") " pod="openshift-controller-manager/controller-manager-5ccfb88f5c-j69xk" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.910602 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/baed76e9-8788-4fe7-bb86-295c669448e5-proxy-ca-bundles\") pod \"controller-manager-5ccfb88f5c-j69xk\" (UID: \"baed76e9-8788-4fe7-bb86-295c669448e5\") " pod="openshift-controller-manager/controller-manager-5ccfb88f5c-j69xk" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.915579 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/baed76e9-8788-4fe7-bb86-295c669448e5-serving-cert\") pod \"controller-manager-5ccfb88f5c-j69xk\" (UID: \"baed76e9-8788-4fe7-bb86-295c669448e5\") " pod="openshift-controller-manager/controller-manager-5ccfb88f5c-j69xk" Jan 27 15:55:23 crc kubenswrapper[4767]: I0127 15:55:23.930045 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5dp5\" (UniqueName: \"kubernetes.io/projected/baed76e9-8788-4fe7-bb86-295c669448e5-kube-api-access-z5dp5\") pod \"controller-manager-5ccfb88f5c-j69xk\" (UID: \"baed76e9-8788-4fe7-bb86-295c669448e5\") " pod="openshift-controller-manager/controller-manager-5ccfb88f5c-j69xk" Jan 27 15:55:24 crc kubenswrapper[4767]: I0127 15:55:24.010925 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/769e7995-330b-4f27-84d0-7d3a63207686-config\") pod \"route-controller-manager-54d5585bdc-fxjhj\" (UID: \"769e7995-330b-4f27-84d0-7d3a63207686\") " pod="openshift-route-controller-manager/route-controller-manager-54d5585bdc-fxjhj" Jan 27 15:55:24 crc kubenswrapper[4767]: I0127 15:55:24.010982 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc526\" (UniqueName: 
\"kubernetes.io/projected/769e7995-330b-4f27-84d0-7d3a63207686-kube-api-access-hc526\") pod \"route-controller-manager-54d5585bdc-fxjhj\" (UID: \"769e7995-330b-4f27-84d0-7d3a63207686\") " pod="openshift-route-controller-manager/route-controller-manager-54d5585bdc-fxjhj" Jan 27 15:55:24 crc kubenswrapper[4767]: I0127 15:55:24.011012 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/769e7995-330b-4f27-84d0-7d3a63207686-client-ca\") pod \"route-controller-manager-54d5585bdc-fxjhj\" (UID: \"769e7995-330b-4f27-84d0-7d3a63207686\") " pod="openshift-route-controller-manager/route-controller-manager-54d5585bdc-fxjhj" Jan 27 15:55:24 crc kubenswrapper[4767]: I0127 15:55:24.011045 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/769e7995-330b-4f27-84d0-7d3a63207686-serving-cert\") pod \"route-controller-manager-54d5585bdc-fxjhj\" (UID: \"769e7995-330b-4f27-84d0-7d3a63207686\") " pod="openshift-route-controller-manager/route-controller-manager-54d5585bdc-fxjhj" Jan 27 15:55:24 crc kubenswrapper[4767]: I0127 15:55:24.012353 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/769e7995-330b-4f27-84d0-7d3a63207686-client-ca\") pod \"route-controller-manager-54d5585bdc-fxjhj\" (UID: \"769e7995-330b-4f27-84d0-7d3a63207686\") " pod="openshift-route-controller-manager/route-controller-manager-54d5585bdc-fxjhj" Jan 27 15:55:24 crc kubenswrapper[4767]: I0127 15:55:24.012411 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/769e7995-330b-4f27-84d0-7d3a63207686-config\") pod \"route-controller-manager-54d5585bdc-fxjhj\" (UID: \"769e7995-330b-4f27-84d0-7d3a63207686\") " pod="openshift-route-controller-manager/route-controller-manager-54d5585bdc-fxjhj" Jan 27 15:55:24 crc kubenswrapper[4767]: I0127 15:55:24.015940 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/769e7995-330b-4f27-84d0-7d3a63207686-serving-cert\") pod \"route-controller-manager-54d5585bdc-fxjhj\" (UID: \"769e7995-330b-4f27-84d0-7d3a63207686\") " pod="openshift-route-controller-manager/route-controller-manager-54d5585bdc-fxjhj" Jan 27 15:55:24 crc kubenswrapper[4767]: I0127 15:55:24.039946 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc526\" (UniqueName: \"kubernetes.io/projected/769e7995-330b-4f27-84d0-7d3a63207686-kube-api-access-hc526\") pod \"route-controller-manager-54d5585bdc-fxjhj\" (UID: \"769e7995-330b-4f27-84d0-7d3a63207686\") " pod="openshift-route-controller-manager/route-controller-manager-54d5585bdc-fxjhj" Jan 27 15:55:24 crc kubenswrapper[4767]: I0127 15:55:24.060707 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5ccfb88f5c-j69xk" Jan 27 15:55:24 crc kubenswrapper[4767]: I0127 15:55:24.072619 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54d5585bdc-fxjhj" Jan 27 15:55:24 crc kubenswrapper[4767]: I0127 15:55:24.284466 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5ccfb88f5c-j69xk"] Jan 27 15:55:24 crc kubenswrapper[4767]: I0127 15:55:24.349285 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92f930b3-7542-41d8-b8c5-1fb7e1fdb08e" path="/var/lib/kubelet/pods/92f930b3-7542-41d8-b8c5-1fb7e1fdb08e/volumes" Jan 27 15:55:24 crc kubenswrapper[4767]: I0127 15:55:24.349876 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="941ea1e9-57d3-4452-bdce-dc901ec4dac7" path="/var/lib/kubelet/pods/941ea1e9-57d3-4452-bdce-dc901ec4dac7/volumes" Jan 27 15:55:24 crc kubenswrapper[4767]: I0127 15:55:24.458572 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5ccfb88f5c-j69xk" event={"ID":"baed76e9-8788-4fe7-bb86-295c669448e5","Type":"ContainerStarted","Data":"35cf3284f14c7bdacfb19b283938da6479f256d33bb5dac59bbf9fce5ecda7ba"} Jan 27 15:55:24 crc kubenswrapper[4767]: I0127 15:55:24.458647 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5ccfb88f5c-j69xk" event={"ID":"baed76e9-8788-4fe7-bb86-295c669448e5","Type":"ContainerStarted","Data":"90ef40c898cf671627f1933f4e4648b2d1bab7e0841d6955774213e9bc458cb9"} Jan 27 15:55:24 crc kubenswrapper[4767]: I0127 15:55:24.459150 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5ccfb88f5c-j69xk" Jan 27 15:55:24 crc kubenswrapper[4767]: I0127 15:55:24.461858 4767 patch_prober.go:28] interesting pod/controller-manager-5ccfb88f5c-j69xk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Jan 27 15:55:24 crc kubenswrapper[4767]: I0127 15:55:24.461917 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5ccfb88f5c-j69xk" podUID="baed76e9-8788-4fe7-bb86-295c669448e5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" Jan 27 15:55:24 crc kubenswrapper[4767]: I0127 15:55:24.484221 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5ccfb88f5c-j69xk" podStartSLOduration=2.484182937 podStartE2EDuration="2.484182937s" podCreationTimestamp="2026-01-27 15:55:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:55:24.481175728 +0000 UTC m=+346.870193261" watchObservedRunningTime="2026-01-27 15:55:24.484182937 +0000 UTC m=+346.873200460" Jan 27 15:55:24 crc kubenswrapper[4767]: I0127 15:55:24.528041 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54d5585bdc-fxjhj"] Jan 27 15:55:25 crc kubenswrapper[4767]: I0127 15:55:25.467015 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54d5585bdc-fxjhj" 
event={"ID":"769e7995-330b-4f27-84d0-7d3a63207686","Type":"ContainerStarted","Data":"0d7c3e7ba0ea09b0db8209f489a5675413f4a81069608a99a53b8db5002c43e8"} Jan 27 15:55:25 crc kubenswrapper[4767]: I0127 15:55:25.467464 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54d5585bdc-fxjhj" event={"ID":"769e7995-330b-4f27-84d0-7d3a63207686","Type":"ContainerStarted","Data":"777e7fdb0ad8a7a99a4da94506f0fb8e89378880de80b891f31ecc86b3d4daee"} Jan 27 15:55:25 crc kubenswrapper[4767]: I0127 15:55:25.472895 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5ccfb88f5c-j69xk" Jan 27 15:55:25 crc kubenswrapper[4767]: I0127 15:55:25.496194 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54d5585bdc-fxjhj" podStartSLOduration=3.4961677079999998 podStartE2EDuration="3.496167708s" podCreationTimestamp="2026-01-27 15:55:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:55:25.48982288 +0000 UTC m=+347.878840403" watchObservedRunningTime="2026-01-27 15:55:25.496167708 +0000 UTC m=+347.885185231" Jan 27 15:55:26 crc kubenswrapper[4767]: I0127 15:55:26.473770 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54d5585bdc-fxjhj" Jan 27 15:55:26 crc kubenswrapper[4767]: I0127 15:55:26.479784 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54d5585bdc-fxjhj" Jan 27 15:55:49 crc kubenswrapper[4767]: I0127 15:55:49.133855 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-n8n94"] Jan 27 15:55:49 crc kubenswrapper[4767]: I0127 15:55:49.135451 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" Jan 27 15:55:49 crc kubenswrapper[4767]: I0127 15:55:49.150901 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-n8n94"] Jan 27 15:55:49 crc kubenswrapper[4767]: I0127 15:55:49.310243 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w5bc\" (UniqueName: \"kubernetes.io/projected/ef947b49-53e1-4bbe-92f7-299647b9b1cd-kube-api-access-5w5bc\") pod \"image-registry-66df7c8f76-n8n94\" (UID: \"ef947b49-53e1-4bbe-92f7-299647b9b1cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" Jan 27 15:55:49 crc kubenswrapper[4767]: I0127 15:55:49.310323 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ef947b49-53e1-4bbe-92f7-299647b9b1cd-registry-certificates\") pod \"image-registry-66df7c8f76-n8n94\" (UID: \"ef947b49-53e1-4bbe-92f7-299647b9b1cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" Jan 27 15:55:49 crc kubenswrapper[4767]: I0127 15:55:49.310372 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-n8n94\" (UID: \"ef947b49-53e1-4bbe-92f7-299647b9b1cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" Jan 27 15:55:49 crc kubenswrapper[4767]: I0127 15:55:49.310416 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef947b49-53e1-4bbe-92f7-299647b9b1cd-trusted-ca\") pod \"image-registry-66df7c8f76-n8n94\" (UID: \"ef947b49-53e1-4bbe-92f7-299647b9b1cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" Jan 27 15:55:49 crc kubenswrapper[4767]: I0127 15:55:49.310454 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ef947b49-53e1-4bbe-92f7-299647b9b1cd-bound-sa-token\") pod \"image-registry-66df7c8f76-n8n94\" (UID: \"ef947b49-53e1-4bbe-92f7-299647b9b1cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" Jan 27 15:55:49 crc kubenswrapper[4767]: I0127 15:55:49.310608 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ef947b49-53e1-4bbe-92f7-299647b9b1cd-registry-tls\") pod \"image-registry-66df7c8f76-n8n94\" (UID: \"ef947b49-53e1-4bbe-92f7-299647b9b1cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" Jan 27 15:55:49 crc kubenswrapper[4767]: I0127 15:55:49.310712 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ef947b49-53e1-4bbe-92f7-299647b9b1cd-installation-pull-secrets\") pod \"image-registry-66df7c8f76-n8n94\" (UID: \"ef947b49-53e1-4bbe-92f7-299647b9b1cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" Jan 27 15:55:49 crc kubenswrapper[4767]: I0127 15:55:49.310766 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/ef947b49-53e1-4bbe-92f7-299647b9b1cd-ca-trust-extracted\") pod \"image-registry-66df7c8f76-n8n94\" (UID: \"ef947b49-53e1-4bbe-92f7-299647b9b1cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" Jan 27 15:55:49 crc kubenswrapper[4767]: I0127 15:55:49.332544 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-n8n94\" (UID: \"ef947b49-53e1-4bbe-92f7-299647b9b1cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" Jan 27 15:55:49 crc kubenswrapper[4767]: I0127 15:55:49.412393 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef947b49-53e1-4bbe-92f7-299647b9b1cd-trusted-ca\") pod \"image-registry-66df7c8f76-n8n94\" (UID: \"ef947b49-53e1-4bbe-92f7-299647b9b1cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" Jan 27 15:55:49 crc kubenswrapper[4767]: I0127 15:55:49.412923 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ef947b49-53e1-4bbe-92f7-299647b9b1cd-bound-sa-token\") pod \"image-registry-66df7c8f76-n8n94\" (UID: \"ef947b49-53e1-4bbe-92f7-299647b9b1cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" Jan 27 15:55:49 crc kubenswrapper[4767]: I0127 15:55:49.413300 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ef947b49-53e1-4bbe-92f7-299647b9b1cd-registry-tls\") pod \"image-registry-66df7c8f76-n8n94\" (UID: \"ef947b49-53e1-4bbe-92f7-299647b9b1cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" Jan 27 15:55:49 crc kubenswrapper[4767]: I0127 15:55:49.413385 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ef947b49-53e1-4bbe-92f7-299647b9b1cd-installation-pull-secrets\") pod \"image-registry-66df7c8f76-n8n94\" (UID: \"ef947b49-53e1-4bbe-92f7-299647b9b1cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" Jan 27 15:55:49 crc kubenswrapper[4767]: I0127 15:55:49.413415 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ef947b49-53e1-4bbe-92f7-299647b9b1cd-ca-trust-extracted\") pod \"image-registry-66df7c8f76-n8n94\" (UID: \"ef947b49-53e1-4bbe-92f7-299647b9b1cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" Jan 27 15:55:49 crc kubenswrapper[4767]: I0127 15:55:49.413490 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w5bc\" (UniqueName: \"kubernetes.io/projected/ef947b49-53e1-4bbe-92f7-299647b9b1cd-kube-api-access-5w5bc\") pod \"image-registry-66df7c8f76-n8n94\" (UID: \"ef947b49-53e1-4bbe-92f7-299647b9b1cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" Jan 27 15:55:49 crc kubenswrapper[4767]: I0127 15:55:49.413533 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ef947b49-53e1-4bbe-92f7-299647b9b1cd-registry-certificates\") pod \"image-registry-66df7c8f76-n8n94\" (UID: \"ef947b49-53e1-4bbe-92f7-299647b9b1cd\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" Jan 27 15:55:49 crc kubenswrapper[4767]: I0127 15:55:49.414463 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ef947b49-53e1-4bbe-92f7-299647b9b1cd-ca-trust-extracted\") pod \"image-registry-66df7c8f76-n8n94\" (UID: \"ef947b49-53e1-4bbe-92f7-299647b9b1cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" Jan 27 15:55:49 crc kubenswrapper[4767]: I0127 15:55:49.415088 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ef947b49-53e1-4bbe-92f7-299647b9b1cd-registry-certificates\") pod \"image-registry-66df7c8f76-n8n94\" (UID: \"ef947b49-53e1-4bbe-92f7-299647b9b1cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" Jan 27 15:55:49 crc kubenswrapper[4767]: I0127 15:55:49.422469 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ef947b49-53e1-4bbe-92f7-299647b9b1cd-registry-tls\") pod \"image-registry-66df7c8f76-n8n94\" (UID: \"ef947b49-53e1-4bbe-92f7-299647b9b1cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" Jan 27 15:55:49 crc kubenswrapper[4767]: I0127 15:55:49.432901 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ef947b49-53e1-4bbe-92f7-299647b9b1cd-installation-pull-secrets\") pod \"image-registry-66df7c8f76-n8n94\" (UID: \"ef947b49-53e1-4bbe-92f7-299647b9b1cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" Jan 27 15:55:49 crc kubenswrapper[4767]: I0127 15:55:49.436154 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef947b49-53e1-4bbe-92f7-299647b9b1cd-trusted-ca\") pod \"image-registry-66df7c8f76-n8n94\" (UID: \"ef947b49-53e1-4bbe-92f7-299647b9b1cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" Jan 27 15:55:49 crc kubenswrapper[4767]: I0127 15:55:49.445321 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ef947b49-53e1-4bbe-92f7-299647b9b1cd-bound-sa-token\") pod \"image-registry-66df7c8f76-n8n94\" (UID: \"ef947b49-53e1-4bbe-92f7-299647b9b1cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" Jan 27 15:55:49 crc kubenswrapper[4767]: I0127 15:55:49.459115 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w5bc\" (UniqueName: \"kubernetes.io/projected/ef947b49-53e1-4bbe-92f7-299647b9b1cd-kube-api-access-5w5bc\") pod \"image-registry-66df7c8f76-n8n94\" (UID: \"ef947b49-53e1-4bbe-92f7-299647b9b1cd\") " pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" Jan 27 15:55:49 crc kubenswrapper[4767]: I0127 15:55:49.459801 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" Jan 27 15:55:49 crc kubenswrapper[4767]: I0127 15:55:49.971430 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-n8n94"] Jan 27 15:55:50 crc kubenswrapper[4767]: I0127 15:55:50.822622 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" event={"ID":"ef947b49-53e1-4bbe-92f7-299647b9b1cd","Type":"ContainerStarted","Data":"9610ab577b9ddd863ab035adc55b2dc3d8352856996ab18bb73cdad6c51243dd"} Jan 27 15:55:50 crc kubenswrapper[4767]: I0127 15:55:50.824192 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" event={"ID":"ef947b49-53e1-4bbe-92f7-299647b9b1cd","Type":"ContainerStarted","Data":"d2a160cd66dbdecd0aed82aae39c37ff85b99e0d94a58ee9737896450d3c6517"} Jan 27 15:55:50 crc kubenswrapper[4767]: I0127 15:55:50.824279 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" Jan 27 15:55:54 crc kubenswrapper[4767]: I0127 15:55:54.857451 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:55:54 crc kubenswrapper[4767]: I0127 15:55:54.857978 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:56:09 crc kubenswrapper[4767]: I0127 15:56:09.466779 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" Jan 27 15:56:09 crc kubenswrapper[4767]: I0127 15:56:09.483782 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-n8n94" podStartSLOduration=20.483763313 podStartE2EDuration="20.483763313s" podCreationTimestamp="2026-01-27 15:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:55:50.842265761 +0000 UTC m=+373.231283294" watchObservedRunningTime="2026-01-27 15:56:09.483763313 +0000 UTC m=+391.872780836" Jan 27 15:56:09 crc kubenswrapper[4767]: I0127 15:56:09.544566 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-f4kgp"] Jan 27 15:56:24 crc kubenswrapper[4767]: I0127 15:56:24.858322 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:56:24 crc kubenswrapper[4767]: I0127 15:56:24.858929 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
Jan 27 15:56:34 crc kubenswrapper[4767]: I0127 15:56:34.582363 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" podUID="5c067093-6c7c-47fb-bcc6-d50bba65fe78" containerName="registry" containerID="cri-o://6ec0446f5147e7cc7d185e377aeb81dd52e270baffc5ec9045c2791afd3637f1" gracePeriod=30
Jan 27 15:56:34 crc kubenswrapper[4767]: I0127 15:56:34.996038 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp"
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.060516 4767 generic.go:334] "Generic (PLEG): container finished" podID="5c067093-6c7c-47fb-bcc6-d50bba65fe78" containerID="6ec0446f5147e7cc7d185e377aeb81dd52e270baffc5ec9045c2791afd3637f1" exitCode=0
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.060584 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" event={"ID":"5c067093-6c7c-47fb-bcc6-d50bba65fe78","Type":"ContainerDied","Data":"6ec0446f5147e7cc7d185e377aeb81dd52e270baffc5ec9045c2791afd3637f1"}
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.060609 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp"
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.060633 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" event={"ID":"5c067093-6c7c-47fb-bcc6-d50bba65fe78","Type":"ContainerDied","Data":"f50a8c385aa358ca0b45e567c3e3cdf04ade8f8a11e8d2dcb072bf4f778d2cbb"}
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.060662 4767 scope.go:117] "RemoveContainer" containerID="6ec0446f5147e7cc7d185e377aeb81dd52e270baffc5ec9045c2791afd3637f1"
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.082712 4767 scope.go:117] "RemoveContainer" containerID="6ec0446f5147e7cc7d185e377aeb81dd52e270baffc5ec9045c2791afd3637f1"
Jan 27 15:56:35 crc kubenswrapper[4767]: E0127 15:56:35.083164 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ec0446f5147e7cc7d185e377aeb81dd52e270baffc5ec9045c2791afd3637f1\": container with ID starting with 6ec0446f5147e7cc7d185e377aeb81dd52e270baffc5ec9045c2791afd3637f1 not found: ID does not exist" containerID="6ec0446f5147e7cc7d185e377aeb81dd52e270baffc5ec9045c2791afd3637f1"
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.083224 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ec0446f5147e7cc7d185e377aeb81dd52e270baffc5ec9045c2791afd3637f1"} err="failed to get container status \"6ec0446f5147e7cc7d185e377aeb81dd52e270baffc5ec9045c2791afd3637f1\": rpc error: code = NotFound desc = could not find container \"6ec0446f5147e7cc7d185e377aeb81dd52e270baffc5ec9045c2791afd3637f1\": container with ID starting with 6ec0446f5147e7cc7d185e377aeb81dd52e270baffc5ec9045c2791afd3637f1 not found: ID does not exist"
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.137507 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5c067093-6c7c-47fb-bcc6-d50bba65fe78-registry-certificates\") pod \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") "
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.137806 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5c067093-6c7c-47fb-bcc6-d50bba65fe78-ca-trust-extracted\") pod \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") "
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.137949 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5c067093-6c7c-47fb-bcc6-d50bba65fe78-bound-sa-token\") pod \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") "
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.138020 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j28q\" (UniqueName: \"kubernetes.io/projected/5c067093-6c7c-47fb-bcc6-d50bba65fe78-kube-api-access-2j28q\") pod \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") "
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.138194 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") "
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.138274 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c067093-6c7c-47fb-bcc6-d50bba65fe78-trusted-ca\") pod \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") "
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.138329 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5c067093-6c7c-47fb-bcc6-d50bba65fe78-registry-tls\") pod \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") "
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.138380 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5c067093-6c7c-47fb-bcc6-d50bba65fe78-installation-pull-secrets\") pod \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\" (UID: \"5c067093-6c7c-47fb-bcc6-d50bba65fe78\") "
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.139421 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c067093-6c7c-47fb-bcc6-d50bba65fe78-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "5c067093-6c7c-47fb-bcc6-d50bba65fe78" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.139777 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c067093-6c7c-47fb-bcc6-d50bba65fe78-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "5c067093-6c7c-47fb-bcc6-d50bba65fe78" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.143760 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c067093-6c7c-47fb-bcc6-d50bba65fe78-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "5c067093-6c7c-47fb-bcc6-d50bba65fe78" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.144017 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c067093-6c7c-47fb-bcc6-d50bba65fe78-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "5c067093-6c7c-47fb-bcc6-d50bba65fe78" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.144238 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c067093-6c7c-47fb-bcc6-d50bba65fe78-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "5c067093-6c7c-47fb-bcc6-d50bba65fe78" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.144378 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c067093-6c7c-47fb-bcc6-d50bba65fe78-kube-api-access-2j28q" (OuterVolumeSpecName: "kube-api-access-2j28q") pod "5c067093-6c7c-47fb-bcc6-d50bba65fe78" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78"). InnerVolumeSpecName "kube-api-access-2j28q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.147121 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "5c067093-6c7c-47fb-bcc6-d50bba65fe78" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.160343 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c067093-6c7c-47fb-bcc6-d50bba65fe78-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "5c067093-6c7c-47fb-bcc6-d50bba65fe78" (UID: "5c067093-6c7c-47fb-bcc6-d50bba65fe78"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.240587 4767 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5c067093-6c7c-47fb-bcc6-d50bba65fe78-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.240642 4767 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5c067093-6c7c-47fb-bcc6-d50bba65fe78-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.240664 4767 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5c067093-6c7c-47fb-bcc6-d50bba65fe78-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.240681 4767 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5c067093-6c7c-47fb-bcc6-d50bba65fe78-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.240791 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2j28q\" (UniqueName: \"kubernetes.io/projected/5c067093-6c7c-47fb-bcc6-d50bba65fe78-kube-api-access-2j28q\") on node \"crc\" DevicePath \"\""
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.240810 4767 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c067093-6c7c-47fb-bcc6-d50bba65fe78-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.240826 4767 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5c067093-6c7c-47fb-bcc6-d50bba65fe78-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.394828 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-f4kgp"]
Jan 27 15:56:35 crc kubenswrapper[4767]: I0127 15:56:35.398150 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-f4kgp"]
Jan 27 15:56:36 crc kubenswrapper[4767]: I0127 15:56:36.333131 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c067093-6c7c-47fb-bcc6-d50bba65fe78" path="/var/lib/kubelet/pods/5c067093-6c7c-47fb-bcc6-d50bba65fe78/volumes"
Jan 27 15:56:39 crc kubenswrapper[4767]: I0127 15:56:39.902661 4767 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-f4kgp container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.30:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 15:56:39 crc kubenswrapper[4767]: I0127 15:56:39.903364 4767 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-f4kgp" podUID="5c067093-6c7c-47fb-bcc6-d50bba65fe78" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.30:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.205139 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lbhhq"]
Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.208671 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lbhhq" podUID="5f897714-8bcf-4ec4-8be0-86dfb0fc4785" containerName="registry-server" containerID="cri-o://ca881660aed89370d466371a63e4a82c4e497d29a17ba25a663198b8ebd6e93f" gracePeriod=30
Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.238600 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6v8jc"]
Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.238936 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6v8jc" podUID="b45a028d-9f8c-4090-985b-e7ddf929554c" containerName="registry-server" containerID="cri-o://04b0182824f7aa7a90b3bdecdb45db36eb38df9c63eeb5e24f6c5bcbf7506baa" gracePeriod=30
Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.243244 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cbltv"]
Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.243521 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-cbltv" podUID="aa6ea9de-9f71-4d6e-8304-536ccfaeaec0" containerName="marketplace-operator" containerID="cri-o://4d5ce3e3a0d4ca7030f04393cda9015d5d78b5399426efad55fe800edb974342" gracePeriod=30
Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.256649 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pz42"]
Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.256915 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6pz42" podUID="53c82776-5f8d-496e-a045-428e96b9f87c" containerName="registry-server" containerID="cri-o://59d0912f63ec16d345984a14f6cb7c79b01bb83df7442074548e3270e315a67f" gracePeriod=30
Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.272026 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wblwm"]
Jan 27 15:56:43 crc kubenswrapper[4767]: E0127 15:56:43.285431 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c067093-6c7c-47fb-bcc6-d50bba65fe78" containerName="registry"
Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.285704 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c067093-6c7c-47fb-bcc6-d50bba65fe78" containerName="registry"
Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.286039 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c067093-6c7c-47fb-bcc6-d50bba65fe78" containerName="registry"
Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.288285 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bnmj9"]
Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.288314 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wblwm"]
Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.288497 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bnmj9" podUID="69b7edc7-f8c2-4e0e-923c-b5a3395ae14d" containerName="registry-server"
containerID="cri-o://cbcadfce1206cdd1288ed5c5b76cd13a8a2a7a468a1f4252d0c7e5aa74828f0b" gracePeriod=30 Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.288860 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wblwm" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.340699 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz22h\" (UniqueName: \"kubernetes.io/projected/a342ddeb-bdff-452a-966d-5460a1c5f924-kube-api-access-sz22h\") pod \"marketplace-operator-79b997595-wblwm\" (UID: \"a342ddeb-bdff-452a-966d-5460a1c5f924\") " pod="openshift-marketplace/marketplace-operator-79b997595-wblwm" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.340843 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a342ddeb-bdff-452a-966d-5460a1c5f924-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wblwm\" (UID: \"a342ddeb-bdff-452a-966d-5460a1c5f924\") " pod="openshift-marketplace/marketplace-operator-79b997595-wblwm" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.340962 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a342ddeb-bdff-452a-966d-5460a1c5f924-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wblwm\" (UID: \"a342ddeb-bdff-452a-966d-5460a1c5f924\") " pod="openshift-marketplace/marketplace-operator-79b997595-wblwm" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.443366 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a342ddeb-bdff-452a-966d-5460a1c5f924-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wblwm\" (UID: \"a342ddeb-bdff-452a-966d-5460a1c5f924\") " pod="openshift-marketplace/marketplace-operator-79b997595-wblwm" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.443465 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz22h\" (UniqueName: \"kubernetes.io/projected/a342ddeb-bdff-452a-966d-5460a1c5f924-kube-api-access-sz22h\") pod \"marketplace-operator-79b997595-wblwm\" (UID: \"a342ddeb-bdff-452a-966d-5460a1c5f924\") " pod="openshift-marketplace/marketplace-operator-79b997595-wblwm" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.443514 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a342ddeb-bdff-452a-966d-5460a1c5f924-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wblwm\" (UID: \"a342ddeb-bdff-452a-966d-5460a1c5f924\") " pod="openshift-marketplace/marketplace-operator-79b997595-wblwm" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.445337 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a342ddeb-bdff-452a-966d-5460a1c5f924-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wblwm\" (UID: \"a342ddeb-bdff-452a-966d-5460a1c5f924\") " pod="openshift-marketplace/marketplace-operator-79b997595-wblwm" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.456633 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a342ddeb-bdff-452a-966d-5460a1c5f924-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wblwm\" (UID: \"a342ddeb-bdff-452a-966d-5460a1c5f924\") " pod="openshift-marketplace/marketplace-operator-79b997595-wblwm" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.470794 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz22h\" (UniqueName: \"kubernetes.io/projected/a342ddeb-bdff-452a-966d-5460a1c5f924-kube-api-access-sz22h\") pod \"marketplace-operator-79b997595-wblwm\" (UID: \"a342ddeb-bdff-452a-966d-5460a1c5f924\") " pod="openshift-marketplace/marketplace-operator-79b997595-wblwm" Jan 27 15:56:43 crc kubenswrapper[4767]: E0127 15:56:43.582979 4767 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 59d0912f63ec16d345984a14f6cb7c79b01bb83df7442074548e3270e315a67f is running failed: container process not found" containerID="59d0912f63ec16d345984a14f6cb7c79b01bb83df7442074548e3270e315a67f" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 15:56:43 crc kubenswrapper[4767]: E0127 15:56:43.583410 4767 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 59d0912f63ec16d345984a14f6cb7c79b01bb83df7442074548e3270e315a67f is running failed: container process not found" containerID="59d0912f63ec16d345984a14f6cb7c79b01bb83df7442074548e3270e315a67f" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 15:56:43 crc kubenswrapper[4767]: E0127 15:56:43.583802 4767 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 59d0912f63ec16d345984a14f6cb7c79b01bb83df7442074548e3270e315a67f is running failed: container process not found" containerID="59d0912f63ec16d345984a14f6cb7c79b01bb83df7442074548e3270e315a67f" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 15:56:43 crc kubenswrapper[4767]: E0127 15:56:43.583862 4767 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 59d0912f63ec16d345984a14f6cb7c79b01bb83df7442074548e3270e315a67f is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-6pz42" podUID="53c82776-5f8d-496e-a045-428e96b9f87c" containerName="registry-server" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.666507 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wblwm" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.670032 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lbhhq" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.712736 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cbltv" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.742020 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6pz42" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.747398 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pdfj\" (UniqueName: \"kubernetes.io/projected/5f897714-8bcf-4ec4-8be0-86dfb0fc4785-kube-api-access-7pdfj\") pod \"5f897714-8bcf-4ec4-8be0-86dfb0fc4785\" (UID: \"5f897714-8bcf-4ec4-8be0-86dfb0fc4785\") " Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.749497 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f897714-8bcf-4ec4-8be0-86dfb0fc4785-utilities\") pod \"5f897714-8bcf-4ec4-8be0-86dfb0fc4785\" (UID: \"5f897714-8bcf-4ec4-8be0-86dfb0fc4785\") " Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.749729 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f897714-8bcf-4ec4-8be0-86dfb0fc4785-catalog-content\") pod \"5f897714-8bcf-4ec4-8be0-86dfb0fc4785\" (UID: \"5f897714-8bcf-4ec4-8be0-86dfb0fc4785\") " Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.750409 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f897714-8bcf-4ec4-8be0-86dfb0fc4785-utilities" (OuterVolumeSpecName: "utilities") pod "5f897714-8bcf-4ec4-8be0-86dfb0fc4785" (UID: "5f897714-8bcf-4ec4-8be0-86dfb0fc4785"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.752414 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f897714-8bcf-4ec4-8be0-86dfb0fc4785-kube-api-access-7pdfj" (OuterVolumeSpecName: "kube-api-access-7pdfj") pod "5f897714-8bcf-4ec4-8be0-86dfb0fc4785" (UID: "5f897714-8bcf-4ec4-8be0-86dfb0fc4785"). InnerVolumeSpecName "kube-api-access-7pdfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.775105 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bnmj9" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.808876 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f897714-8bcf-4ec4-8be0-86dfb0fc4785-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f897714-8bcf-4ec4-8be0-86dfb0fc4785" (UID: "5f897714-8bcf-4ec4-8be0-86dfb0fc4785"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.851134 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53c82776-5f8d-496e-a045-428e96b9f87c-utilities\") pod \"53c82776-5f8d-496e-a045-428e96b9f87c\" (UID: \"53c82776-5f8d-496e-a045-428e96b9f87c\") " Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.851218 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aa6ea9de-9f71-4d6e-8304-536ccfaeaec0-marketplace-operator-metrics\") pod \"aa6ea9de-9f71-4d6e-8304-536ccfaeaec0\" (UID: \"aa6ea9de-9f71-4d6e-8304-536ccfaeaec0\") " Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.851243 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53c82776-5f8d-496e-a045-428e96b9f87c-catalog-content\") pod \"53c82776-5f8d-496e-a045-428e96b9f87c\" (UID: \"53c82776-5f8d-496e-a045-428e96b9f87c\") " Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.851266 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b7edc7-f8c2-4e0e-923c-b5a3395ae14d-utilities\") pod \"69b7edc7-f8c2-4e0e-923c-b5a3395ae14d\" (UID: \"69b7edc7-f8c2-4e0e-923c-b5a3395ae14d\") " Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.851309 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa6ea9de-9f71-4d6e-8304-536ccfaeaec0-marketplace-trusted-ca\") pod \"aa6ea9de-9f71-4d6e-8304-536ccfaeaec0\" (UID: \"aa6ea9de-9f71-4d6e-8304-536ccfaeaec0\") " Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.851345 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldlg5\" (UniqueName: \"kubernetes.io/projected/69b7edc7-f8c2-4e0e-923c-b5a3395ae14d-kube-api-access-ldlg5\") pod \"69b7edc7-f8c2-4e0e-923c-b5a3395ae14d\" (UID: \"69b7edc7-f8c2-4e0e-923c-b5a3395ae14d\") " Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.851377 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48f59\" (UniqueName: \"kubernetes.io/projected/aa6ea9de-9f71-4d6e-8304-536ccfaeaec0-kube-api-access-48f59\") pod \"aa6ea9de-9f71-4d6e-8304-536ccfaeaec0\" (UID: \"aa6ea9de-9f71-4d6e-8304-536ccfaeaec0\") " Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.851409 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b7edc7-f8c2-4e0e-923c-b5a3395ae14d-catalog-content\") pod \"69b7edc7-f8c2-4e0e-923c-b5a3395ae14d\" (UID: \"69b7edc7-f8c2-4e0e-923c-b5a3395ae14d\") " Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.851478 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tmz2\" (UniqueName: \"kubernetes.io/projected/53c82776-5f8d-496e-a045-428e96b9f87c-kube-api-access-5tmz2\") pod \"53c82776-5f8d-496e-a045-428e96b9f87c\" (UID: \"53c82776-5f8d-496e-a045-428e96b9f87c\") " Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.851701 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f897714-8bcf-4ec4-8be0-86dfb0fc4785-utilities\") on node \"crc\" 
DevicePath \"\"" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.851724 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f897714-8bcf-4ec4-8be0-86dfb0fc4785-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.851738 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pdfj\" (UniqueName: \"kubernetes.io/projected/5f897714-8bcf-4ec4-8be0-86dfb0fc4785-kube-api-access-7pdfj\") on node \"crc\" DevicePath \"\"" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.853616 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69b7edc7-f8c2-4e0e-923c-b5a3395ae14d-utilities" (OuterVolumeSpecName: "utilities") pod "69b7edc7-f8c2-4e0e-923c-b5a3395ae14d" (UID: "69b7edc7-f8c2-4e0e-923c-b5a3395ae14d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.854283 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53c82776-5f8d-496e-a045-428e96b9f87c-utilities" (OuterVolumeSpecName: "utilities") pod "53c82776-5f8d-496e-a045-428e96b9f87c" (UID: "53c82776-5f8d-496e-a045-428e96b9f87c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.856417 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa6ea9de-9f71-4d6e-8304-536ccfaeaec0-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "aa6ea9de-9f71-4d6e-8304-536ccfaeaec0" (UID: "aa6ea9de-9f71-4d6e-8304-536ccfaeaec0"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.858432 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa6ea9de-9f71-4d6e-8304-536ccfaeaec0-kube-api-access-48f59" (OuterVolumeSpecName: "kube-api-access-48f59") pod "aa6ea9de-9f71-4d6e-8304-536ccfaeaec0" (UID: "aa6ea9de-9f71-4d6e-8304-536ccfaeaec0"). InnerVolumeSpecName "kube-api-access-48f59". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.858555 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa6ea9de-9f71-4d6e-8304-536ccfaeaec0-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "aa6ea9de-9f71-4d6e-8304-536ccfaeaec0" (UID: "aa6ea9de-9f71-4d6e-8304-536ccfaeaec0"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.858859 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69b7edc7-f8c2-4e0e-923c-b5a3395ae14d-kube-api-access-ldlg5" (OuterVolumeSpecName: "kube-api-access-ldlg5") pod "69b7edc7-f8c2-4e0e-923c-b5a3395ae14d" (UID: "69b7edc7-f8c2-4e0e-923c-b5a3395ae14d"). InnerVolumeSpecName "kube-api-access-ldlg5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.861769 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53c82776-5f8d-496e-a045-428e96b9f87c-kube-api-access-5tmz2" (OuterVolumeSpecName: "kube-api-access-5tmz2") pod "53c82776-5f8d-496e-a045-428e96b9f87c" (UID: "53c82776-5f8d-496e-a045-428e96b9f87c"). InnerVolumeSpecName "kube-api-access-5tmz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.881418 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53c82776-5f8d-496e-a045-428e96b9f87c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53c82776-5f8d-496e-a045-428e96b9f87c" (UID: "53c82776-5f8d-496e-a045-428e96b9f87c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.920435 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6v8jc" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.953083 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53c82776-5f8d-496e-a045-428e96b9f87c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.953123 4767 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aa6ea9de-9f71-4d6e-8304-536ccfaeaec0-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.953134 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53c82776-5f8d-496e-a045-428e96b9f87c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.953142 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b7edc7-f8c2-4e0e-923c-b5a3395ae14d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.953150 4767 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa6ea9de-9f71-4d6e-8304-536ccfaeaec0-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.953161 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldlg5\" (UniqueName: \"kubernetes.io/projected/69b7edc7-f8c2-4e0e-923c-b5a3395ae14d-kube-api-access-ldlg5\") on node \"crc\" DevicePath \"\"" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.953171 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48f59\" (UniqueName: \"kubernetes.io/projected/aa6ea9de-9f71-4d6e-8304-536ccfaeaec0-kube-api-access-48f59\") on node \"crc\" DevicePath \"\"" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.953180 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tmz2\" (UniqueName: \"kubernetes.io/projected/53c82776-5f8d-496e-a045-428e96b9f87c-kube-api-access-5tmz2\") on node \"crc\" DevicePath \"\"" Jan 27 15:56:43 crc kubenswrapper[4767]: I0127 15:56:43.982674 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/69b7edc7-f8c2-4e0e-923c-b5a3395ae14d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69b7edc7-f8c2-4e0e-923c-b5a3395ae14d" (UID: "69b7edc7-f8c2-4e0e-923c-b5a3395ae14d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.054761 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b45a028d-9f8c-4090-985b-e7ddf929554c-utilities\") pod \"b45a028d-9f8c-4090-985b-e7ddf929554c\" (UID: \"b45a028d-9f8c-4090-985b-e7ddf929554c\") " Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.054866 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms4kk\" (UniqueName: \"kubernetes.io/projected/b45a028d-9f8c-4090-985b-e7ddf929554c-kube-api-access-ms4kk\") pod \"b45a028d-9f8c-4090-985b-e7ddf929554c\" (UID: \"b45a028d-9f8c-4090-985b-e7ddf929554c\") " Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.054904 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b45a028d-9f8c-4090-985b-e7ddf929554c-catalog-content\") pod \"b45a028d-9f8c-4090-985b-e7ddf929554c\" (UID: \"b45a028d-9f8c-4090-985b-e7ddf929554c\") " Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.055166 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b7edc7-f8c2-4e0e-923c-b5a3395ae14d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.056307 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b45a028d-9f8c-4090-985b-e7ddf929554c-utilities" (OuterVolumeSpecName: "utilities") pod "b45a028d-9f8c-4090-985b-e7ddf929554c" (UID: "b45a028d-9f8c-4090-985b-e7ddf929554c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.057726 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b45a028d-9f8c-4090-985b-e7ddf929554c-kube-api-access-ms4kk" (OuterVolumeSpecName: "kube-api-access-ms4kk") pod "b45a028d-9f8c-4090-985b-e7ddf929554c" (UID: "b45a028d-9f8c-4090-985b-e7ddf929554c"). InnerVolumeSpecName "kube-api-access-ms4kk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.100928 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b45a028d-9f8c-4090-985b-e7ddf929554c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b45a028d-9f8c-4090-985b-e7ddf929554c" (UID: "b45a028d-9f8c-4090-985b-e7ddf929554c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.118470 4767 generic.go:334] "Generic (PLEG): container finished" podID="b45a028d-9f8c-4090-985b-e7ddf929554c" containerID="04b0182824f7aa7a90b3bdecdb45db36eb38df9c63eeb5e24f6c5bcbf7506baa" exitCode=0 Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.118569 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6v8jc" event={"ID":"b45a028d-9f8c-4090-985b-e7ddf929554c","Type":"ContainerDied","Data":"04b0182824f7aa7a90b3bdecdb45db36eb38df9c63eeb5e24f6c5bcbf7506baa"} Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.118580 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6v8jc" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.118613 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6v8jc" event={"ID":"b45a028d-9f8c-4090-985b-e7ddf929554c","Type":"ContainerDied","Data":"faf0e1f3c7c9b040b4709e3b739f7b82d8ef980792b1a6bfeef73bd6f101f689"} Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.118655 4767 scope.go:117] "RemoveContainer" containerID="04b0182824f7aa7a90b3bdecdb45db36eb38df9c63eeb5e24f6c5bcbf7506baa" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.123957 4767 generic.go:334] "Generic (PLEG): container finished" podID="69b7edc7-f8c2-4e0e-923c-b5a3395ae14d" containerID="cbcadfce1206cdd1288ed5c5b76cd13a8a2a7a468a1f4252d0c7e5aa74828f0b" exitCode=0 Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.124062 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bnmj9" event={"ID":"69b7edc7-f8c2-4e0e-923c-b5a3395ae14d","Type":"ContainerDied","Data":"cbcadfce1206cdd1288ed5c5b76cd13a8a2a7a468a1f4252d0c7e5aa74828f0b"} Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.124109 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bnmj9" event={"ID":"69b7edc7-f8c2-4e0e-923c-b5a3395ae14d","Type":"ContainerDied","Data":"0e2e45e64e8e2596ccc78c7ca6d94a21d6f194da9c3c2b01814203763881fef8"} Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.124252 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bnmj9" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.126235 4767 generic.go:334] "Generic (PLEG): container finished" podID="aa6ea9de-9f71-4d6e-8304-536ccfaeaec0" containerID="4d5ce3e3a0d4ca7030f04393cda9015d5d78b5399426efad55fe800edb974342" exitCode=0 Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.126268 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cbltv" event={"ID":"aa6ea9de-9f71-4d6e-8304-536ccfaeaec0","Type":"ContainerDied","Data":"4d5ce3e3a0d4ca7030f04393cda9015d5d78b5399426efad55fe800edb974342"} Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.126303 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cbltv" event={"ID":"aa6ea9de-9f71-4d6e-8304-536ccfaeaec0","Type":"ContainerDied","Data":"a79d1795c9cf5f608126ae01b7e4dc4e607d07ad939724d29f091b4c6e7b39fb"} Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.126248 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cbltv" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.129956 4767 generic.go:334] "Generic (PLEG): container finished" podID="53c82776-5f8d-496e-a045-428e96b9f87c" containerID="59d0912f63ec16d345984a14f6cb7c79b01bb83df7442074548e3270e315a67f" exitCode=0 Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.130362 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pz42" event={"ID":"53c82776-5f8d-496e-a045-428e96b9f87c","Type":"ContainerDied","Data":"59d0912f63ec16d345984a14f6cb7c79b01bb83df7442074548e3270e315a67f"} Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.130392 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pz42" event={"ID":"53c82776-5f8d-496e-a045-428e96b9f87c","Type":"ContainerDied","Data":"c6212406b850756b2b6613f66dd05f5e4a4b5de51d2b71b5e0124b3288e999c8"} Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.130463 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6pz42" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.138988 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wblwm"] Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.140576 4767 generic.go:334] "Generic (PLEG): container finished" podID="5f897714-8bcf-4ec4-8be0-86dfb0fc4785" containerID="ca881660aed89370d466371a63e4a82c4e497d29a17ba25a663198b8ebd6e93f" exitCode=0 Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.140667 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lbhhq" event={"ID":"5f897714-8bcf-4ec4-8be0-86dfb0fc4785","Type":"ContainerDied","Data":"ca881660aed89370d466371a63e4a82c4e497d29a17ba25a663198b8ebd6e93f"} Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.140706 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lbhhq" event={"ID":"5f897714-8bcf-4ec4-8be0-86dfb0fc4785","Type":"ContainerDied","Data":"2d7ad657e944ff882ae60befe785254dea482a6b19ae2658395b07c5f79bf371"} Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.140800 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lbhhq" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.143897 4767 scope.go:117] "RemoveContainer" containerID="e86bfa3c41b806871ee62fd82a757a0d49d31d4274d2ed80212e745e29107dcd" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.156801 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms4kk\" (UniqueName: \"kubernetes.io/projected/b45a028d-9f8c-4090-985b-e7ddf929554c-kube-api-access-ms4kk\") on node \"crc\" DevicePath \"\"" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.156831 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b45a028d-9f8c-4090-985b-e7ddf929554c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.156842 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b45a028d-9f8c-4090-985b-e7ddf929554c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.164357 4767 scope.go:117] "RemoveContainer" containerID="ade3366287947227c09279d05b7302046c6a4fda81a219f2dd2db8940d5c893c" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.170430 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6v8jc"] Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.178561 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6v8jc"] Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.185993 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bnmj9"] Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.189668 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bnmj9"] Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.198314 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cbltv"] Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.198340 4767 scope.go:117] "RemoveContainer" containerID="04b0182824f7aa7a90b3bdecdb45db36eb38df9c63eeb5e24f6c5bcbf7506baa" Jan 27 15:56:44 crc kubenswrapper[4767]: E0127 15:56:44.198870 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04b0182824f7aa7a90b3bdecdb45db36eb38df9c63eeb5e24f6c5bcbf7506baa\": container with ID starting with 04b0182824f7aa7a90b3bdecdb45db36eb38df9c63eeb5e24f6c5bcbf7506baa not found: ID does not exist" containerID="04b0182824f7aa7a90b3bdecdb45db36eb38df9c63eeb5e24f6c5bcbf7506baa" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.198909 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04b0182824f7aa7a90b3bdecdb45db36eb38df9c63eeb5e24f6c5bcbf7506baa"} err="failed to get container status \"04b0182824f7aa7a90b3bdecdb45db36eb38df9c63eeb5e24f6c5bcbf7506baa\": rpc error: code = NotFound desc = could not find container \"04b0182824f7aa7a90b3bdecdb45db36eb38df9c63eeb5e24f6c5bcbf7506baa\": container with ID starting with 04b0182824f7aa7a90b3bdecdb45db36eb38df9c63eeb5e24f6c5bcbf7506baa not found: ID does not exist" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.198935 4767 scope.go:117] "RemoveContainer" containerID="e86bfa3c41b806871ee62fd82a757a0d49d31d4274d2ed80212e745e29107dcd" Jan 27 15:56:44 crc 
kubenswrapper[4767]: E0127 15:56:44.199333 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e86bfa3c41b806871ee62fd82a757a0d49d31d4274d2ed80212e745e29107dcd\": container with ID starting with e86bfa3c41b806871ee62fd82a757a0d49d31d4274d2ed80212e745e29107dcd not found: ID does not exist" containerID="e86bfa3c41b806871ee62fd82a757a0d49d31d4274d2ed80212e745e29107dcd" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.199377 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e86bfa3c41b806871ee62fd82a757a0d49d31d4274d2ed80212e745e29107dcd"} err="failed to get container status \"e86bfa3c41b806871ee62fd82a757a0d49d31d4274d2ed80212e745e29107dcd\": rpc error: code = NotFound desc = could not find container \"e86bfa3c41b806871ee62fd82a757a0d49d31d4274d2ed80212e745e29107dcd\": container with ID starting with e86bfa3c41b806871ee62fd82a757a0d49d31d4274d2ed80212e745e29107dcd not found: ID does not exist" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.199407 4767 scope.go:117] "RemoveContainer" containerID="ade3366287947227c09279d05b7302046c6a4fda81a219f2dd2db8940d5c893c" Jan 27 15:56:44 crc kubenswrapper[4767]: E0127 15:56:44.199707 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ade3366287947227c09279d05b7302046c6a4fda81a219f2dd2db8940d5c893c\": container with ID starting with ade3366287947227c09279d05b7302046c6a4fda81a219f2dd2db8940d5c893c not found: ID does not exist" containerID="ade3366287947227c09279d05b7302046c6a4fda81a219f2dd2db8940d5c893c" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.199739 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ade3366287947227c09279d05b7302046c6a4fda81a219f2dd2db8940d5c893c"} err="failed to get container status \"ade3366287947227c09279d05b7302046c6a4fda81a219f2dd2db8940d5c893c\": rpc error: code = NotFound desc = could not find container \"ade3366287947227c09279d05b7302046c6a4fda81a219f2dd2db8940d5c893c\": container with ID starting with ade3366287947227c09279d05b7302046c6a4fda81a219f2dd2db8940d5c893c not found: ID does not exist" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.199794 4767 scope.go:117] "RemoveContainer" containerID="cbcadfce1206cdd1288ed5c5b76cd13a8a2a7a468a1f4252d0c7e5aa74828f0b" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.202743 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cbltv"] Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.213115 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pz42"] Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.213181 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pz42"] Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.238890 4767 scope.go:117] "RemoveContainer" containerID="999fa9e1ada0aaf3a0252676536b85f72bf27340fe5f2034e990393f4ed973d7" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.264551 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lbhhq"] Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.266716 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lbhhq"] Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 
15:56:44.273497 4767 scope.go:117] "RemoveContainer" containerID="7bc09270d8c041f980a2b64aca174d9a90c24b81755a8645b02d9574ef95a129" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.294103 4767 scope.go:117] "RemoveContainer" containerID="cbcadfce1206cdd1288ed5c5b76cd13a8a2a7a468a1f4252d0c7e5aa74828f0b" Jan 27 15:56:44 crc kubenswrapper[4767]: E0127 15:56:44.294587 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbcadfce1206cdd1288ed5c5b76cd13a8a2a7a468a1f4252d0c7e5aa74828f0b\": container with ID starting with cbcadfce1206cdd1288ed5c5b76cd13a8a2a7a468a1f4252d0c7e5aa74828f0b not found: ID does not exist" containerID="cbcadfce1206cdd1288ed5c5b76cd13a8a2a7a468a1f4252d0c7e5aa74828f0b" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.294626 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbcadfce1206cdd1288ed5c5b76cd13a8a2a7a468a1f4252d0c7e5aa74828f0b"} err="failed to get container status \"cbcadfce1206cdd1288ed5c5b76cd13a8a2a7a468a1f4252d0c7e5aa74828f0b\": rpc error: code = NotFound desc = could not find container \"cbcadfce1206cdd1288ed5c5b76cd13a8a2a7a468a1f4252d0c7e5aa74828f0b\": container with ID starting with cbcadfce1206cdd1288ed5c5b76cd13a8a2a7a468a1f4252d0c7e5aa74828f0b not found: ID does not exist" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.294652 4767 scope.go:117] "RemoveContainer" containerID="999fa9e1ada0aaf3a0252676536b85f72bf27340fe5f2034e990393f4ed973d7" Jan 27 15:56:44 crc kubenswrapper[4767]: E0127 15:56:44.295054 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"999fa9e1ada0aaf3a0252676536b85f72bf27340fe5f2034e990393f4ed973d7\": container with ID starting with 999fa9e1ada0aaf3a0252676536b85f72bf27340fe5f2034e990393f4ed973d7 not found: ID does not exist" containerID="999fa9e1ada0aaf3a0252676536b85f72bf27340fe5f2034e990393f4ed973d7" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.295098 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"999fa9e1ada0aaf3a0252676536b85f72bf27340fe5f2034e990393f4ed973d7"} err="failed to get container status \"999fa9e1ada0aaf3a0252676536b85f72bf27340fe5f2034e990393f4ed973d7\": rpc error: code = NotFound desc = could not find container \"999fa9e1ada0aaf3a0252676536b85f72bf27340fe5f2034e990393f4ed973d7\": container with ID starting with 999fa9e1ada0aaf3a0252676536b85f72bf27340fe5f2034e990393f4ed973d7 not found: ID does not exist" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.295126 4767 scope.go:117] "RemoveContainer" containerID="7bc09270d8c041f980a2b64aca174d9a90c24b81755a8645b02d9574ef95a129" Jan 27 15:56:44 crc kubenswrapper[4767]: E0127 15:56:44.295445 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bc09270d8c041f980a2b64aca174d9a90c24b81755a8645b02d9574ef95a129\": container with ID starting with 7bc09270d8c041f980a2b64aca174d9a90c24b81755a8645b02d9574ef95a129 not found: ID does not exist" containerID="7bc09270d8c041f980a2b64aca174d9a90c24b81755a8645b02d9574ef95a129" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.295468 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bc09270d8c041f980a2b64aca174d9a90c24b81755a8645b02d9574ef95a129"} err="failed to get container status 
\"7bc09270d8c041f980a2b64aca174d9a90c24b81755a8645b02d9574ef95a129\": rpc error: code = NotFound desc = could not find container \"7bc09270d8c041f980a2b64aca174d9a90c24b81755a8645b02d9574ef95a129\": container with ID starting with 7bc09270d8c041f980a2b64aca174d9a90c24b81755a8645b02d9574ef95a129 not found: ID does not exist" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.295481 4767 scope.go:117] "RemoveContainer" containerID="4d5ce3e3a0d4ca7030f04393cda9015d5d78b5399426efad55fe800edb974342" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.312963 4767 scope.go:117] "RemoveContainer" containerID="4d5ce3e3a0d4ca7030f04393cda9015d5d78b5399426efad55fe800edb974342" Jan 27 15:56:44 crc kubenswrapper[4767]: E0127 15:56:44.313370 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d5ce3e3a0d4ca7030f04393cda9015d5d78b5399426efad55fe800edb974342\": container with ID starting with 4d5ce3e3a0d4ca7030f04393cda9015d5d78b5399426efad55fe800edb974342 not found: ID does not exist" containerID="4d5ce3e3a0d4ca7030f04393cda9015d5d78b5399426efad55fe800edb974342" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.313408 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d5ce3e3a0d4ca7030f04393cda9015d5d78b5399426efad55fe800edb974342"} err="failed to get container status \"4d5ce3e3a0d4ca7030f04393cda9015d5d78b5399426efad55fe800edb974342\": rpc error: code = NotFound desc = could not find container \"4d5ce3e3a0d4ca7030f04393cda9015d5d78b5399426efad55fe800edb974342\": container with ID starting with 4d5ce3e3a0d4ca7030f04393cda9015d5d78b5399426efad55fe800edb974342 not found: ID does not exist" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.313433 4767 scope.go:117] "RemoveContainer" containerID="59d0912f63ec16d345984a14f6cb7c79b01bb83df7442074548e3270e315a67f" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.334092 4767 scope.go:117] "RemoveContainer" containerID="d202bf13c77465ad119c6b43d1c396f1e04a804f985c1d5dc346a84d07e80066" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.334600 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53c82776-5f8d-496e-a045-428e96b9f87c" path="/var/lib/kubelet/pods/53c82776-5f8d-496e-a045-428e96b9f87c/volumes" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.335564 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f897714-8bcf-4ec4-8be0-86dfb0fc4785" path="/var/lib/kubelet/pods/5f897714-8bcf-4ec4-8be0-86dfb0fc4785/volumes" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.336261 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69b7edc7-f8c2-4e0e-923c-b5a3395ae14d" path="/var/lib/kubelet/pods/69b7edc7-f8c2-4e0e-923c-b5a3395ae14d/volumes" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.337436 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa6ea9de-9f71-4d6e-8304-536ccfaeaec0" path="/var/lib/kubelet/pods/aa6ea9de-9f71-4d6e-8304-536ccfaeaec0/volumes" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.337932 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b45a028d-9f8c-4090-985b-e7ddf929554c" path="/var/lib/kubelet/pods/b45a028d-9f8c-4090-985b-e7ddf929554c/volumes" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.378089 4767 scope.go:117] "RemoveContainer" containerID="09957739a34903519ce6316c22131659e4fdd1c1ec7b5af225191d7906e766a4" Jan 27 15:56:44 crc 
kubenswrapper[4767]: I0127 15:56:44.391357 4767 scope.go:117] "RemoveContainer" containerID="59d0912f63ec16d345984a14f6cb7c79b01bb83df7442074548e3270e315a67f" Jan 27 15:56:44 crc kubenswrapper[4767]: E0127 15:56:44.393013 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59d0912f63ec16d345984a14f6cb7c79b01bb83df7442074548e3270e315a67f\": container with ID starting with 59d0912f63ec16d345984a14f6cb7c79b01bb83df7442074548e3270e315a67f not found: ID does not exist" containerID="59d0912f63ec16d345984a14f6cb7c79b01bb83df7442074548e3270e315a67f" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.393068 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59d0912f63ec16d345984a14f6cb7c79b01bb83df7442074548e3270e315a67f"} err="failed to get container status \"59d0912f63ec16d345984a14f6cb7c79b01bb83df7442074548e3270e315a67f\": rpc error: code = NotFound desc = could not find container \"59d0912f63ec16d345984a14f6cb7c79b01bb83df7442074548e3270e315a67f\": container with ID starting with 59d0912f63ec16d345984a14f6cb7c79b01bb83df7442074548e3270e315a67f not found: ID does not exist" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.393103 4767 scope.go:117] "RemoveContainer" containerID="d202bf13c77465ad119c6b43d1c396f1e04a804f985c1d5dc346a84d07e80066" Jan 27 15:56:44 crc kubenswrapper[4767]: E0127 15:56:44.393494 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d202bf13c77465ad119c6b43d1c396f1e04a804f985c1d5dc346a84d07e80066\": container with ID starting with d202bf13c77465ad119c6b43d1c396f1e04a804f985c1d5dc346a84d07e80066 not found: ID does not exist" containerID="d202bf13c77465ad119c6b43d1c396f1e04a804f985c1d5dc346a84d07e80066" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.393543 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d202bf13c77465ad119c6b43d1c396f1e04a804f985c1d5dc346a84d07e80066"} err="failed to get container status \"d202bf13c77465ad119c6b43d1c396f1e04a804f985c1d5dc346a84d07e80066\": rpc error: code = NotFound desc = could not find container \"d202bf13c77465ad119c6b43d1c396f1e04a804f985c1d5dc346a84d07e80066\": container with ID starting with d202bf13c77465ad119c6b43d1c396f1e04a804f985c1d5dc346a84d07e80066 not found: ID does not exist" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.393579 4767 scope.go:117] "RemoveContainer" containerID="09957739a34903519ce6316c22131659e4fdd1c1ec7b5af225191d7906e766a4" Jan 27 15:56:44 crc kubenswrapper[4767]: E0127 15:56:44.393915 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09957739a34903519ce6316c22131659e4fdd1c1ec7b5af225191d7906e766a4\": container with ID starting with 09957739a34903519ce6316c22131659e4fdd1c1ec7b5af225191d7906e766a4 not found: ID does not exist" containerID="09957739a34903519ce6316c22131659e4fdd1c1ec7b5af225191d7906e766a4" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.393950 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09957739a34903519ce6316c22131659e4fdd1c1ec7b5af225191d7906e766a4"} err="failed to get container status \"09957739a34903519ce6316c22131659e4fdd1c1ec7b5af225191d7906e766a4\": rpc error: code = NotFound desc = could not find container 
\"09957739a34903519ce6316c22131659e4fdd1c1ec7b5af225191d7906e766a4\": container with ID starting with 09957739a34903519ce6316c22131659e4fdd1c1ec7b5af225191d7906e766a4 not found: ID does not exist" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.393969 4767 scope.go:117] "RemoveContainer" containerID="ca881660aed89370d466371a63e4a82c4e497d29a17ba25a663198b8ebd6e93f" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.406815 4767 scope.go:117] "RemoveContainer" containerID="9db9a243c6eccc7e88138f8f0ea2201fc59d7f2b715467dbf2238fd32db0bfe2" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.419456 4767 scope.go:117] "RemoveContainer" containerID="88685c0b64213866a9b4483f8318390e22fecf2c8e0d06854509e6f8e56d3a1f" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.432945 4767 scope.go:117] "RemoveContainer" containerID="ca881660aed89370d466371a63e4a82c4e497d29a17ba25a663198b8ebd6e93f" Jan 27 15:56:44 crc kubenswrapper[4767]: E0127 15:56:44.433604 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca881660aed89370d466371a63e4a82c4e497d29a17ba25a663198b8ebd6e93f\": container with ID starting with ca881660aed89370d466371a63e4a82c4e497d29a17ba25a663198b8ebd6e93f not found: ID does not exist" containerID="ca881660aed89370d466371a63e4a82c4e497d29a17ba25a663198b8ebd6e93f" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.433645 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca881660aed89370d466371a63e4a82c4e497d29a17ba25a663198b8ebd6e93f"} err="failed to get container status \"ca881660aed89370d466371a63e4a82c4e497d29a17ba25a663198b8ebd6e93f\": rpc error: code = NotFound desc = could not find container \"ca881660aed89370d466371a63e4a82c4e497d29a17ba25a663198b8ebd6e93f\": container with ID starting with ca881660aed89370d466371a63e4a82c4e497d29a17ba25a663198b8ebd6e93f not found: ID does not exist" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.433674 4767 scope.go:117] "RemoveContainer" containerID="9db9a243c6eccc7e88138f8f0ea2201fc59d7f2b715467dbf2238fd32db0bfe2" Jan 27 15:56:44 crc kubenswrapper[4767]: E0127 15:56:44.434039 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9db9a243c6eccc7e88138f8f0ea2201fc59d7f2b715467dbf2238fd32db0bfe2\": container with ID starting with 9db9a243c6eccc7e88138f8f0ea2201fc59d7f2b715467dbf2238fd32db0bfe2 not found: ID does not exist" containerID="9db9a243c6eccc7e88138f8f0ea2201fc59d7f2b715467dbf2238fd32db0bfe2" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.434071 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9db9a243c6eccc7e88138f8f0ea2201fc59d7f2b715467dbf2238fd32db0bfe2"} err="failed to get container status \"9db9a243c6eccc7e88138f8f0ea2201fc59d7f2b715467dbf2238fd32db0bfe2\": rpc error: code = NotFound desc = could not find container \"9db9a243c6eccc7e88138f8f0ea2201fc59d7f2b715467dbf2238fd32db0bfe2\": container with ID starting with 9db9a243c6eccc7e88138f8f0ea2201fc59d7f2b715467dbf2238fd32db0bfe2 not found: ID does not exist" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.434090 4767 scope.go:117] "RemoveContainer" containerID="88685c0b64213866a9b4483f8318390e22fecf2c8e0d06854509e6f8e56d3a1f" Jan 27 15:56:44 crc kubenswrapper[4767]: E0127 15:56:44.434456 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"88685c0b64213866a9b4483f8318390e22fecf2c8e0d06854509e6f8e56d3a1f\": container with ID starting with 88685c0b64213866a9b4483f8318390e22fecf2c8e0d06854509e6f8e56d3a1f not found: ID does not exist" containerID="88685c0b64213866a9b4483f8318390e22fecf2c8e0d06854509e6f8e56d3a1f" Jan 27 15:56:44 crc kubenswrapper[4767]: I0127 15:56:44.434495 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88685c0b64213866a9b4483f8318390e22fecf2c8e0d06854509e6f8e56d3a1f"} err="failed to get container status \"88685c0b64213866a9b4483f8318390e22fecf2c8e0d06854509e6f8e56d3a1f\": rpc error: code = NotFound desc = could not find container \"88685c0b64213866a9b4483f8318390e22fecf2c8e0d06854509e6f8e56d3a1f\": container with ID starting with 88685c0b64213866a9b4483f8318390e22fecf2c8e0d06854509e6f8e56d3a1f not found: ID does not exist" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.148546 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wblwm" event={"ID":"a342ddeb-bdff-452a-966d-5460a1c5f924","Type":"ContainerStarted","Data":"e0afd4624d33e2cf1c9de48b06354a55873bbf5629cc3678722b2a6131ecca35"} Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.149028 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wblwm" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.149055 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wblwm" event={"ID":"a342ddeb-bdff-452a-966d-5460a1c5f924","Type":"ContainerStarted","Data":"bf4dc39774d39d4e0479e0db9ad7c6b3f97f195b33e531ad2a43dca3384db5b2"} Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.152502 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wblwm" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.169724 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-wblwm" podStartSLOduration=2.16969844 podStartE2EDuration="2.16969844s" podCreationTimestamp="2026-01-27 15:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:56:45.164189876 +0000 UTC m=+427.553207429" watchObservedRunningTime="2026-01-27 15:56:45.16969844 +0000 UTC m=+427.558715963" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.428961 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pp2kc"] Jan 27 15:56:45 crc kubenswrapper[4767]: E0127 15:56:45.429234 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53c82776-5f8d-496e-a045-428e96b9f87c" containerName="extract-content" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.429252 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c82776-5f8d-496e-a045-428e96b9f87c" containerName="extract-content" Jan 27 15:56:45 crc kubenswrapper[4767]: E0127 15:56:45.429272 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45a028d-9f8c-4090-985b-e7ddf929554c" containerName="extract-content" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.429282 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45a028d-9f8c-4090-985b-e7ddf929554c" containerName="extract-content" Jan 27 15:56:45 crc kubenswrapper[4767]: 
E0127 15:56:45.429296 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53c82776-5f8d-496e-a045-428e96b9f87c" containerName="registry-server" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.429307 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c82776-5f8d-496e-a045-428e96b9f87c" containerName="registry-server" Jan 27 15:56:45 crc kubenswrapper[4767]: E0127 15:56:45.429325 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b7edc7-f8c2-4e0e-923c-b5a3395ae14d" containerName="registry-server" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.429333 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b7edc7-f8c2-4e0e-923c-b5a3395ae14d" containerName="registry-server" Jan 27 15:56:45 crc kubenswrapper[4767]: E0127 15:56:45.429344 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b7edc7-f8c2-4e0e-923c-b5a3395ae14d" containerName="extract-content" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.429352 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b7edc7-f8c2-4e0e-923c-b5a3395ae14d" containerName="extract-content" Jan 27 15:56:45 crc kubenswrapper[4767]: E0127 15:56:45.429364 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53c82776-5f8d-496e-a045-428e96b9f87c" containerName="extract-utilities" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.429373 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c82776-5f8d-496e-a045-428e96b9f87c" containerName="extract-utilities" Jan 27 15:56:45 crc kubenswrapper[4767]: E0127 15:56:45.429384 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b7edc7-f8c2-4e0e-923c-b5a3395ae14d" containerName="extract-utilities" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.429392 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b7edc7-f8c2-4e0e-923c-b5a3395ae14d" containerName="extract-utilities" Jan 27 15:56:45 crc kubenswrapper[4767]: E0127 15:56:45.429402 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45a028d-9f8c-4090-985b-e7ddf929554c" containerName="registry-server" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.429410 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45a028d-9f8c-4090-985b-e7ddf929554c" containerName="registry-server" Jan 27 15:56:45 crc kubenswrapper[4767]: E0127 15:56:45.429421 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f897714-8bcf-4ec4-8be0-86dfb0fc4785" containerName="extract-content" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.429430 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f897714-8bcf-4ec4-8be0-86dfb0fc4785" containerName="extract-content" Jan 27 15:56:45 crc kubenswrapper[4767]: E0127 15:56:45.429442 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f897714-8bcf-4ec4-8be0-86dfb0fc4785" containerName="registry-server" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.429450 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f897714-8bcf-4ec4-8be0-86dfb0fc4785" containerName="registry-server" Jan 27 15:56:45 crc kubenswrapper[4767]: E0127 15:56:45.429462 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa6ea9de-9f71-4d6e-8304-536ccfaeaec0" containerName="marketplace-operator" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.429470 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa6ea9de-9f71-4d6e-8304-536ccfaeaec0" containerName="marketplace-operator" Jan 
27 15:56:45 crc kubenswrapper[4767]: E0127 15:56:45.429486 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f897714-8bcf-4ec4-8be0-86dfb0fc4785" containerName="extract-utilities" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.429494 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f897714-8bcf-4ec4-8be0-86dfb0fc4785" containerName="extract-utilities" Jan 27 15:56:45 crc kubenswrapper[4767]: E0127 15:56:45.429504 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45a028d-9f8c-4090-985b-e7ddf929554c" containerName="extract-utilities" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.429512 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45a028d-9f8c-4090-985b-e7ddf929554c" containerName="extract-utilities" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.429617 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c82776-5f8d-496e-a045-428e96b9f87c" containerName="registry-server" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.429628 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f897714-8bcf-4ec4-8be0-86dfb0fc4785" containerName="registry-server" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.429647 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45a028d-9f8c-4090-985b-e7ddf929554c" containerName="registry-server" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.429657 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="69b7edc7-f8c2-4e0e-923c-b5a3395ae14d" containerName="registry-server" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.429667 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa6ea9de-9f71-4d6e-8304-536ccfaeaec0" containerName="marketplace-operator" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.430698 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pp2kc" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.436538 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.438016 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pp2kc"] Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.576053 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw9d9\" (UniqueName: \"kubernetes.io/projected/6e9e5c7b-5521-4815-9f8d-8de92c9fce65-kube-api-access-tw9d9\") pod \"certified-operators-pp2kc\" (UID: \"6e9e5c7b-5521-4815-9f8d-8de92c9fce65\") " pod="openshift-marketplace/certified-operators-pp2kc" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.576173 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e9e5c7b-5521-4815-9f8d-8de92c9fce65-catalog-content\") pod \"certified-operators-pp2kc\" (UID: \"6e9e5c7b-5521-4815-9f8d-8de92c9fce65\") " pod="openshift-marketplace/certified-operators-pp2kc" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.576219 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e9e5c7b-5521-4815-9f8d-8de92c9fce65-utilities\") pod \"certified-operators-pp2kc\" (UID: \"6e9e5c7b-5521-4815-9f8d-8de92c9fce65\") " pod="openshift-marketplace/certified-operators-pp2kc" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.623753 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-96zhx"] Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.628737 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-96zhx" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.631659 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.640727 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-96zhx"] Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.677825 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e9e5c7b-5521-4815-9f8d-8de92c9fce65-catalog-content\") pod \"certified-operators-pp2kc\" (UID: \"6e9e5c7b-5521-4815-9f8d-8de92c9fce65\") " pod="openshift-marketplace/certified-operators-pp2kc" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.677949 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e9e5c7b-5521-4815-9f8d-8de92c9fce65-utilities\") pod \"certified-operators-pp2kc\" (UID: \"6e9e5c7b-5521-4815-9f8d-8de92c9fce65\") " pod="openshift-marketplace/certified-operators-pp2kc" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.678048 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw9d9\" (UniqueName: \"kubernetes.io/projected/6e9e5c7b-5521-4815-9f8d-8de92c9fce65-kube-api-access-tw9d9\") pod \"certified-operators-pp2kc\" (UID: \"6e9e5c7b-5521-4815-9f8d-8de92c9fce65\") " pod="openshift-marketplace/certified-operators-pp2kc" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.678519 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e9e5c7b-5521-4815-9f8d-8de92c9fce65-catalog-content\") pod \"certified-operators-pp2kc\" (UID: \"6e9e5c7b-5521-4815-9f8d-8de92c9fce65\") " pod="openshift-marketplace/certified-operators-pp2kc" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.678822 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e9e5c7b-5521-4815-9f8d-8de92c9fce65-utilities\") pod \"certified-operators-pp2kc\" (UID: \"6e9e5c7b-5521-4815-9f8d-8de92c9fce65\") " pod="openshift-marketplace/certified-operators-pp2kc" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.698796 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw9d9\" (UniqueName: \"kubernetes.io/projected/6e9e5c7b-5521-4815-9f8d-8de92c9fce65-kube-api-access-tw9d9\") pod \"certified-operators-pp2kc\" (UID: \"6e9e5c7b-5521-4815-9f8d-8de92c9fce65\") " pod="openshift-marketplace/certified-operators-pp2kc" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.754343 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pp2kc" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.779178 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3c62726-f5dc-452a-9284-63a4d82ba2c4-catalog-content\") pod \"redhat-marketplace-96zhx\" (UID: \"a3c62726-f5dc-452a-9284-63a4d82ba2c4\") " pod="openshift-marketplace/redhat-marketplace-96zhx" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.779411 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4zrn\" (UniqueName: \"kubernetes.io/projected/a3c62726-f5dc-452a-9284-63a4d82ba2c4-kube-api-access-h4zrn\") pod \"redhat-marketplace-96zhx\" (UID: \"a3c62726-f5dc-452a-9284-63a4d82ba2c4\") " pod="openshift-marketplace/redhat-marketplace-96zhx" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.779500 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3c62726-f5dc-452a-9284-63a4d82ba2c4-utilities\") pod \"redhat-marketplace-96zhx\" (UID: \"a3c62726-f5dc-452a-9284-63a4d82ba2c4\") " pod="openshift-marketplace/redhat-marketplace-96zhx" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.880457 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4zrn\" (UniqueName: \"kubernetes.io/projected/a3c62726-f5dc-452a-9284-63a4d82ba2c4-kube-api-access-h4zrn\") pod \"redhat-marketplace-96zhx\" (UID: \"a3c62726-f5dc-452a-9284-63a4d82ba2c4\") " pod="openshift-marketplace/redhat-marketplace-96zhx" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.880808 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3c62726-f5dc-452a-9284-63a4d82ba2c4-utilities\") pod \"redhat-marketplace-96zhx\" (UID: \"a3c62726-f5dc-452a-9284-63a4d82ba2c4\") " pod="openshift-marketplace/redhat-marketplace-96zhx" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.880831 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3c62726-f5dc-452a-9284-63a4d82ba2c4-catalog-content\") pod \"redhat-marketplace-96zhx\" (UID: \"a3c62726-f5dc-452a-9284-63a4d82ba2c4\") " pod="openshift-marketplace/redhat-marketplace-96zhx" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.881346 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3c62726-f5dc-452a-9284-63a4d82ba2c4-utilities\") pod \"redhat-marketplace-96zhx\" (UID: \"a3c62726-f5dc-452a-9284-63a4d82ba2c4\") " pod="openshift-marketplace/redhat-marketplace-96zhx" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.881396 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3c62726-f5dc-452a-9284-63a4d82ba2c4-catalog-content\") pod \"redhat-marketplace-96zhx\" (UID: \"a3c62726-f5dc-452a-9284-63a4d82ba2c4\") " pod="openshift-marketplace/redhat-marketplace-96zhx" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.899272 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4zrn\" (UniqueName: \"kubernetes.io/projected/a3c62726-f5dc-452a-9284-63a4d82ba2c4-kube-api-access-h4zrn\") pod 
\"redhat-marketplace-96zhx\" (UID: \"a3c62726-f5dc-452a-9284-63a4d82ba2c4\") " pod="openshift-marketplace/redhat-marketplace-96zhx" Jan 27 15:56:45 crc kubenswrapper[4767]: I0127 15:56:45.947300 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-96zhx" Jan 27 15:56:46 crc kubenswrapper[4767]: I0127 15:56:46.186778 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pp2kc"] Jan 27 15:56:46 crc kubenswrapper[4767]: W0127 15:56:46.193566 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e9e5c7b_5521_4815_9f8d_8de92c9fce65.slice/crio-18b072ffd9d7fb80edccbc75c8c990b998bb072b65cb5fab277afa72ec3ba2d4 WatchSource:0}: Error finding container 18b072ffd9d7fb80edccbc75c8c990b998bb072b65cb5fab277afa72ec3ba2d4: Status 404 returned error can't find the container with id 18b072ffd9d7fb80edccbc75c8c990b998bb072b65cb5fab277afa72ec3ba2d4 Jan 27 15:56:46 crc kubenswrapper[4767]: I0127 15:56:46.321733 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-96zhx"] Jan 27 15:56:46 crc kubenswrapper[4767]: W0127 15:56:46.329932 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3c62726_f5dc_452a_9284_63a4d82ba2c4.slice/crio-a3bb16766f3b84667c76012f19d39178807ab9ca5453b366fbae3b87261e425b WatchSource:0}: Error finding container a3bb16766f3b84667c76012f19d39178807ab9ca5453b366fbae3b87261e425b: Status 404 returned error can't find the container with id a3bb16766f3b84667c76012f19d39178807ab9ca5453b366fbae3b87261e425b Jan 27 15:56:47 crc kubenswrapper[4767]: I0127 15:56:47.165268 4767 generic.go:334] "Generic (PLEG): container finished" podID="6e9e5c7b-5521-4815-9f8d-8de92c9fce65" containerID="51c1b07e554d48b1c80a42ad06605ab81b8a5b207ef807d734103c489794b43a" exitCode=0 Jan 27 15:56:47 crc kubenswrapper[4767]: I0127 15:56:47.165323 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pp2kc" event={"ID":"6e9e5c7b-5521-4815-9f8d-8de92c9fce65","Type":"ContainerDied","Data":"51c1b07e554d48b1c80a42ad06605ab81b8a5b207ef807d734103c489794b43a"} Jan 27 15:56:47 crc kubenswrapper[4767]: I0127 15:56:47.165656 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pp2kc" event={"ID":"6e9e5c7b-5521-4815-9f8d-8de92c9fce65","Type":"ContainerStarted","Data":"18b072ffd9d7fb80edccbc75c8c990b998bb072b65cb5fab277afa72ec3ba2d4"} Jan 27 15:56:47 crc kubenswrapper[4767]: I0127 15:56:47.168172 4767 generic.go:334] "Generic (PLEG): container finished" podID="a3c62726-f5dc-452a-9284-63a4d82ba2c4" containerID="b21b039abd685860c28dbf85fe134b87335baf7f28dbfaebae444f600f374541" exitCode=0 Jan 27 15:56:47 crc kubenswrapper[4767]: I0127 15:56:47.168240 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-96zhx" event={"ID":"a3c62726-f5dc-452a-9284-63a4d82ba2c4","Type":"ContainerDied","Data":"b21b039abd685860c28dbf85fe134b87335baf7f28dbfaebae444f600f374541"} Jan 27 15:56:47 crc kubenswrapper[4767]: I0127 15:56:47.168273 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-96zhx" event={"ID":"a3c62726-f5dc-452a-9284-63a4d82ba2c4","Type":"ContainerStarted","Data":"a3bb16766f3b84667c76012f19d39178807ab9ca5453b366fbae3b87261e425b"} Jan 27 
15:56:47 crc kubenswrapper[4767]: I0127 15:56:47.820438 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x8k6k"] Jan 27 15:56:47 crc kubenswrapper[4767]: I0127 15:56:47.823343 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x8k6k" Jan 27 15:56:47 crc kubenswrapper[4767]: I0127 15:56:47.825937 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 15:56:47 crc kubenswrapper[4767]: I0127 15:56:47.834015 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x8k6k"] Jan 27 15:56:47 crc kubenswrapper[4767]: I0127 15:56:47.904300 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hql2q\" (UniqueName: \"kubernetes.io/projected/0d786c99-0af9-45d4-af0f-2568df55af59-kube-api-access-hql2q\") pod \"redhat-operators-x8k6k\" (UID: \"0d786c99-0af9-45d4-af0f-2568df55af59\") " pod="openshift-marketplace/redhat-operators-x8k6k" Jan 27 15:56:47 crc kubenswrapper[4767]: I0127 15:56:47.904360 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d786c99-0af9-45d4-af0f-2568df55af59-utilities\") pod \"redhat-operators-x8k6k\" (UID: \"0d786c99-0af9-45d4-af0f-2568df55af59\") " pod="openshift-marketplace/redhat-operators-x8k6k" Jan 27 15:56:47 crc kubenswrapper[4767]: I0127 15:56:47.904418 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d786c99-0af9-45d4-af0f-2568df55af59-catalog-content\") pod \"redhat-operators-x8k6k\" (UID: \"0d786c99-0af9-45d4-af0f-2568df55af59\") " pod="openshift-marketplace/redhat-operators-x8k6k" Jan 27 15:56:48 crc kubenswrapper[4767]: I0127 15:56:48.005676 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d786c99-0af9-45d4-af0f-2568df55af59-utilities\") pod \"redhat-operators-x8k6k\" (UID: \"0d786c99-0af9-45d4-af0f-2568df55af59\") " pod="openshift-marketplace/redhat-operators-x8k6k" Jan 27 15:56:48 crc kubenswrapper[4767]: I0127 15:56:48.006193 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d786c99-0af9-45d4-af0f-2568df55af59-catalog-content\") pod \"redhat-operators-x8k6k\" (UID: \"0d786c99-0af9-45d4-af0f-2568df55af59\") " pod="openshift-marketplace/redhat-operators-x8k6k" Jan 27 15:56:48 crc kubenswrapper[4767]: I0127 15:56:48.006290 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hql2q\" (UniqueName: \"kubernetes.io/projected/0d786c99-0af9-45d4-af0f-2568df55af59-kube-api-access-hql2q\") pod \"redhat-operators-x8k6k\" (UID: \"0d786c99-0af9-45d4-af0f-2568df55af59\") " pod="openshift-marketplace/redhat-operators-x8k6k" Jan 27 15:56:48 crc kubenswrapper[4767]: I0127 15:56:48.006343 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d786c99-0af9-45d4-af0f-2568df55af59-utilities\") pod \"redhat-operators-x8k6k\" (UID: \"0d786c99-0af9-45d4-af0f-2568df55af59\") " pod="openshift-marketplace/redhat-operators-x8k6k" Jan 27 15:56:48 crc kubenswrapper[4767]: I0127 15:56:48.006665 4767 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d786c99-0af9-45d4-af0f-2568df55af59-catalog-content\") pod \"redhat-operators-x8k6k\" (UID: \"0d786c99-0af9-45d4-af0f-2568df55af59\") " pod="openshift-marketplace/redhat-operators-x8k6k" Jan 27 15:56:48 crc kubenswrapper[4767]: I0127 15:56:48.021949 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f8p8n"] Jan 27 15:56:48 crc kubenswrapper[4767]: I0127 15:56:48.023127 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f8p8n" Jan 27 15:56:48 crc kubenswrapper[4767]: I0127 15:56:48.026770 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 15:56:48 crc kubenswrapper[4767]: I0127 15:56:48.035288 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hql2q\" (UniqueName: \"kubernetes.io/projected/0d786c99-0af9-45d4-af0f-2568df55af59-kube-api-access-hql2q\") pod \"redhat-operators-x8k6k\" (UID: \"0d786c99-0af9-45d4-af0f-2568df55af59\") " pod="openshift-marketplace/redhat-operators-x8k6k" Jan 27 15:56:48 crc kubenswrapper[4767]: I0127 15:56:48.038031 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f8p8n"] Jan 27 15:56:48 crc kubenswrapper[4767]: I0127 15:56:48.107332 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r82l\" (UniqueName: \"kubernetes.io/projected/6d760573-73ce-45c4-bb6c-bb7fad22d7b3-kube-api-access-9r82l\") pod \"community-operators-f8p8n\" (UID: \"6d760573-73ce-45c4-bb6c-bb7fad22d7b3\") " pod="openshift-marketplace/community-operators-f8p8n" Jan 27 15:56:48 crc kubenswrapper[4767]: I0127 15:56:48.107401 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d760573-73ce-45c4-bb6c-bb7fad22d7b3-catalog-content\") pod \"community-operators-f8p8n\" (UID: \"6d760573-73ce-45c4-bb6c-bb7fad22d7b3\") " pod="openshift-marketplace/community-operators-f8p8n" Jan 27 15:56:48 crc kubenswrapper[4767]: I0127 15:56:48.107427 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d760573-73ce-45c4-bb6c-bb7fad22d7b3-utilities\") pod \"community-operators-f8p8n\" (UID: \"6d760573-73ce-45c4-bb6c-bb7fad22d7b3\") " pod="openshift-marketplace/community-operators-f8p8n" Jan 27 15:56:48 crc kubenswrapper[4767]: I0127 15:56:48.145515 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x8k6k" Jan 27 15:56:48 crc kubenswrapper[4767]: I0127 15:56:48.176508 4767 generic.go:334] "Generic (PLEG): container finished" podID="a3c62726-f5dc-452a-9284-63a4d82ba2c4" containerID="45994bb95282faf4c1a9eb1eba41d2097a16523be3b00ac262795a9ae2a5c801" exitCode=0 Jan 27 15:56:48 crc kubenswrapper[4767]: I0127 15:56:48.176559 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-96zhx" event={"ID":"a3c62726-f5dc-452a-9284-63a4d82ba2c4","Type":"ContainerDied","Data":"45994bb95282faf4c1a9eb1eba41d2097a16523be3b00ac262795a9ae2a5c801"} Jan 27 15:56:48 crc kubenswrapper[4767]: I0127 15:56:48.208850 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d760573-73ce-45c4-bb6c-bb7fad22d7b3-catalog-content\") pod \"community-operators-f8p8n\" (UID: \"6d760573-73ce-45c4-bb6c-bb7fad22d7b3\") " pod="openshift-marketplace/community-operators-f8p8n" Jan 27 15:56:48 crc kubenswrapper[4767]: I0127 15:56:48.208902 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d760573-73ce-45c4-bb6c-bb7fad22d7b3-utilities\") pod \"community-operators-f8p8n\" (UID: \"6d760573-73ce-45c4-bb6c-bb7fad22d7b3\") " pod="openshift-marketplace/community-operators-f8p8n" Jan 27 15:56:48 crc kubenswrapper[4767]: I0127 15:56:48.208958 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r82l\" (UniqueName: \"kubernetes.io/projected/6d760573-73ce-45c4-bb6c-bb7fad22d7b3-kube-api-access-9r82l\") pod \"community-operators-f8p8n\" (UID: \"6d760573-73ce-45c4-bb6c-bb7fad22d7b3\") " pod="openshift-marketplace/community-operators-f8p8n" Jan 27 15:56:48 crc kubenswrapper[4767]: I0127 15:56:48.209367 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d760573-73ce-45c4-bb6c-bb7fad22d7b3-catalog-content\") pod \"community-operators-f8p8n\" (UID: \"6d760573-73ce-45c4-bb6c-bb7fad22d7b3\") " pod="openshift-marketplace/community-operators-f8p8n" Jan 27 15:56:48 crc kubenswrapper[4767]: I0127 15:56:48.209411 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d760573-73ce-45c4-bb6c-bb7fad22d7b3-utilities\") pod \"community-operators-f8p8n\" (UID: \"6d760573-73ce-45c4-bb6c-bb7fad22d7b3\") " pod="openshift-marketplace/community-operators-f8p8n" Jan 27 15:56:48 crc kubenswrapper[4767]: I0127 15:56:48.228735 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r82l\" (UniqueName: \"kubernetes.io/projected/6d760573-73ce-45c4-bb6c-bb7fad22d7b3-kube-api-access-9r82l\") pod \"community-operators-f8p8n\" (UID: \"6d760573-73ce-45c4-bb6c-bb7fad22d7b3\") " pod="openshift-marketplace/community-operators-f8p8n" Jan 27 15:56:48 crc kubenswrapper[4767]: I0127 15:56:48.367122 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f8p8n" Jan 27 15:56:48 crc kubenswrapper[4767]: I0127 15:56:48.533844 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x8k6k"] Jan 27 15:56:48 crc kubenswrapper[4767]: W0127 15:56:48.547331 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d786c99_0af9_45d4_af0f_2568df55af59.slice/crio-00f6b1b6b6240f0465e32f009b07bd002ffb01f4d70f62f1785b3e99034a7bd1 WatchSource:0}: Error finding container 00f6b1b6b6240f0465e32f009b07bd002ffb01f4d70f62f1785b3e99034a7bd1: Status 404 returned error can't find the container with id 00f6b1b6b6240f0465e32f009b07bd002ffb01f4d70f62f1785b3e99034a7bd1 Jan 27 15:56:48 crc kubenswrapper[4767]: I0127 15:56:48.754956 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f8p8n"] Jan 27 15:56:48 crc kubenswrapper[4767]: W0127 15:56:48.771796 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d760573_73ce_45c4_bb6c_bb7fad22d7b3.slice/crio-a4bc594e63684d5d6771f29917ec1dbe5a186b1f50230b7d496a248173669c30 WatchSource:0}: Error finding container a4bc594e63684d5d6771f29917ec1dbe5a186b1f50230b7d496a248173669c30: Status 404 returned error can't find the container with id a4bc594e63684d5d6771f29917ec1dbe5a186b1f50230b7d496a248173669c30 Jan 27 15:56:49 crc kubenswrapper[4767]: I0127 15:56:49.183228 4767 generic.go:334] "Generic (PLEG): container finished" podID="6e9e5c7b-5521-4815-9f8d-8de92c9fce65" containerID="2083f130383dc12fb9a943b00d082181739a70d947cff5b01ffff99353346acb" exitCode=0 Jan 27 15:56:49 crc kubenswrapper[4767]: I0127 15:56:49.183293 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pp2kc" event={"ID":"6e9e5c7b-5521-4815-9f8d-8de92c9fce65","Type":"ContainerDied","Data":"2083f130383dc12fb9a943b00d082181739a70d947cff5b01ffff99353346acb"} Jan 27 15:56:49 crc kubenswrapper[4767]: I0127 15:56:49.185439 4767 generic.go:334] "Generic (PLEG): container finished" podID="6d760573-73ce-45c4-bb6c-bb7fad22d7b3" containerID="14c69f0b361f371de84c63e7d39079b08488f3411bc9b451f7dd4ae023898076" exitCode=0 Jan 27 15:56:49 crc kubenswrapper[4767]: I0127 15:56:49.185546 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8p8n" event={"ID":"6d760573-73ce-45c4-bb6c-bb7fad22d7b3","Type":"ContainerDied","Data":"14c69f0b361f371de84c63e7d39079b08488f3411bc9b451f7dd4ae023898076"} Jan 27 15:56:49 crc kubenswrapper[4767]: I0127 15:56:49.185658 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8p8n" event={"ID":"6d760573-73ce-45c4-bb6c-bb7fad22d7b3","Type":"ContainerStarted","Data":"a4bc594e63684d5d6771f29917ec1dbe5a186b1f50230b7d496a248173669c30"} Jan 27 15:56:49 crc kubenswrapper[4767]: I0127 15:56:49.188159 4767 generic.go:334] "Generic (PLEG): container finished" podID="0d786c99-0af9-45d4-af0f-2568df55af59" containerID="51496ac06c08a08f6396ec0e2e66f9c64317e242dfde25ba9bcebdca734b499e" exitCode=0 Jan 27 15:56:49 crc kubenswrapper[4767]: I0127 15:56:49.188584 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8k6k" event={"ID":"0d786c99-0af9-45d4-af0f-2568df55af59","Type":"ContainerDied","Data":"51496ac06c08a08f6396ec0e2e66f9c64317e242dfde25ba9bcebdca734b499e"} 
Jan 27 15:56:49 crc kubenswrapper[4767]: I0127 15:56:49.188608 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8k6k" event={"ID":"0d786c99-0af9-45d4-af0f-2568df55af59","Type":"ContainerStarted","Data":"00f6b1b6b6240f0465e32f009b07bd002ffb01f4d70f62f1785b3e99034a7bd1"} Jan 27 15:56:49 crc kubenswrapper[4767]: I0127 15:56:49.192211 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-96zhx" event={"ID":"a3c62726-f5dc-452a-9284-63a4d82ba2c4","Type":"ContainerStarted","Data":"bf8a88d65c88b000d019b24f1b088e15b51f06093506f33ffa23d77f21d72c2a"} Jan 27 15:56:49 crc kubenswrapper[4767]: I0127 15:56:49.222120 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-96zhx" podStartSLOduration=2.793468118 podStartE2EDuration="4.222105279s" podCreationTimestamp="2026-01-27 15:56:45 +0000 UTC" firstStartedPulling="2026-01-27 15:56:47.169387934 +0000 UTC m=+429.558405457" lastFinishedPulling="2026-01-27 15:56:48.598025095 +0000 UTC m=+430.987042618" observedRunningTime="2026-01-27 15:56:49.220976986 +0000 UTC m=+431.609994519" watchObservedRunningTime="2026-01-27 15:56:49.222105279 +0000 UTC m=+431.611122792" Jan 27 15:56:50 crc kubenswrapper[4767]: I0127 15:56:50.200359 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8p8n" event={"ID":"6d760573-73ce-45c4-bb6c-bb7fad22d7b3","Type":"ContainerStarted","Data":"1edb957e43cb2f1e05f1bcb93e21e3e142f10804f68a6249c457d7d2acc03306"} Jan 27 15:56:50 crc kubenswrapper[4767]: I0127 15:56:50.204115 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8k6k" event={"ID":"0d786c99-0af9-45d4-af0f-2568df55af59","Type":"ContainerStarted","Data":"7afda90f8ceac1828f1ae98402e1ecbccd1465a166ed618d4ce2b6a46b7fc6cb"} Jan 27 15:56:50 crc kubenswrapper[4767]: I0127 15:56:50.213754 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pp2kc" event={"ID":"6e9e5c7b-5521-4815-9f8d-8de92c9fce65","Type":"ContainerStarted","Data":"82b8dd19f2b4d9ba9195ee0f8ee84c8b0b93e5eea1c88db2287ad36dd659661b"} Jan 27 15:56:50 crc kubenswrapper[4767]: I0127 15:56:50.243620 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pp2kc" podStartSLOduration=2.833582874 podStartE2EDuration="5.243594887s" podCreationTimestamp="2026-01-27 15:56:45 +0000 UTC" firstStartedPulling="2026-01-27 15:56:47.168100205 +0000 UTC m=+429.557117728" lastFinishedPulling="2026-01-27 15:56:49.578112198 +0000 UTC m=+431.967129741" observedRunningTime="2026-01-27 15:56:50.239398232 +0000 UTC m=+432.628415755" watchObservedRunningTime="2026-01-27 15:56:50.243594887 +0000 UTC m=+432.632612410" Jan 27 15:56:51 crc kubenswrapper[4767]: I0127 15:56:51.220720 4767 generic.go:334] "Generic (PLEG): container finished" podID="6d760573-73ce-45c4-bb6c-bb7fad22d7b3" containerID="1edb957e43cb2f1e05f1bcb93e21e3e142f10804f68a6249c457d7d2acc03306" exitCode=0 Jan 27 15:56:51 crc kubenswrapper[4767]: I0127 15:56:51.220912 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8p8n" event={"ID":"6d760573-73ce-45c4-bb6c-bb7fad22d7b3","Type":"ContainerDied","Data":"1edb957e43cb2f1e05f1bcb93e21e3e142f10804f68a6249c457d7d2acc03306"} Jan 27 15:56:51 crc kubenswrapper[4767]: I0127 15:56:51.221151 4767 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-f8p8n" event={"ID":"6d760573-73ce-45c4-bb6c-bb7fad22d7b3","Type":"ContainerStarted","Data":"722337b744114a3aca9547934e467fb44d99773173d98b30654ee41bfff329f8"} Jan 27 15:56:51 crc kubenswrapper[4767]: I0127 15:56:51.223765 4767 generic.go:334] "Generic (PLEG): container finished" podID="0d786c99-0af9-45d4-af0f-2568df55af59" containerID="7afda90f8ceac1828f1ae98402e1ecbccd1465a166ed618d4ce2b6a46b7fc6cb" exitCode=0 Jan 27 15:56:51 crc kubenswrapper[4767]: I0127 15:56:51.223800 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8k6k" event={"ID":"0d786c99-0af9-45d4-af0f-2568df55af59","Type":"ContainerDied","Data":"7afda90f8ceac1828f1ae98402e1ecbccd1465a166ed618d4ce2b6a46b7fc6cb"} Jan 27 15:56:51 crc kubenswrapper[4767]: I0127 15:56:51.242885 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f8p8n" podStartSLOduration=1.815592461 podStartE2EDuration="3.242865141s" podCreationTimestamp="2026-01-27 15:56:48 +0000 UTC" firstStartedPulling="2026-01-27 15:56:49.186739464 +0000 UTC m=+431.575756987" lastFinishedPulling="2026-01-27 15:56:50.614012144 +0000 UTC m=+433.003029667" observedRunningTime="2026-01-27 15:56:51.239299054 +0000 UTC m=+433.628316587" watchObservedRunningTime="2026-01-27 15:56:51.242865141 +0000 UTC m=+433.631882664" Jan 27 15:56:52 crc kubenswrapper[4767]: I0127 15:56:52.230690 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8k6k" event={"ID":"0d786c99-0af9-45d4-af0f-2568df55af59","Type":"ContainerStarted","Data":"15f8885e31e30d53385f321ead788e6d02380fd94bf82d3499d303567e121176"} Jan 27 15:56:52 crc kubenswrapper[4767]: I0127 15:56:52.253332 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x8k6k" podStartSLOduration=2.824651111 podStartE2EDuration="5.253210226s" podCreationTimestamp="2026-01-27 15:56:47 +0000 UTC" firstStartedPulling="2026-01-27 15:56:49.189988091 +0000 UTC m=+431.579005614" lastFinishedPulling="2026-01-27 15:56:51.618547206 +0000 UTC m=+434.007564729" observedRunningTime="2026-01-27 15:56:52.248678481 +0000 UTC m=+434.637696014" watchObservedRunningTime="2026-01-27 15:56:52.253210226 +0000 UTC m=+434.642227759" Jan 27 15:56:54 crc kubenswrapper[4767]: I0127 15:56:54.858252 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:56:54 crc kubenswrapper[4767]: I0127 15:56:54.858795 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:56:54 crc kubenswrapper[4767]: I0127 15:56:54.858850 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" Jan 27 15:56:54 crc kubenswrapper[4767]: I0127 15:56:54.859524 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"e7ed48adaa0e9bc3ad71d07ed5596b4b1fc231c226ada212f6d4dce03922dd53"} pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:56:54 crc kubenswrapper[4767]: I0127 15:56:54.859593 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" containerID="cri-o://e7ed48adaa0e9bc3ad71d07ed5596b4b1fc231c226ada212f6d4dce03922dd53" gracePeriod=600 Jan 27 15:56:55 crc kubenswrapper[4767]: I0127 15:56:55.246232 4767 generic.go:334] "Generic (PLEG): container finished" podID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerID="e7ed48adaa0e9bc3ad71d07ed5596b4b1fc231c226ada212f6d4dce03922dd53" exitCode=0 Jan 27 15:56:55 crc kubenswrapper[4767]: I0127 15:56:55.246307 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerDied","Data":"e7ed48adaa0e9bc3ad71d07ed5596b4b1fc231c226ada212f6d4dce03922dd53"} Jan 27 15:56:55 crc kubenswrapper[4767]: I0127 15:56:55.246632 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerStarted","Data":"353e2744423f1b1adbab04b1b018d0bf34fbc9cefa51f745c7fff9315767a5a5"} Jan 27 15:56:55 crc kubenswrapper[4767]: I0127 15:56:55.246655 4767 scope.go:117] "RemoveContainer" containerID="21d40a73dfb9ca034fd8cf554f1d216546f248cfb6a917464c43bc4f15d0546a" Jan 27 15:56:55 crc kubenswrapper[4767]: I0127 15:56:55.754577 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pp2kc" Jan 27 15:56:55 crc kubenswrapper[4767]: I0127 15:56:55.755011 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pp2kc" Jan 27 15:56:55 crc kubenswrapper[4767]: I0127 15:56:55.798696 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pp2kc" Jan 27 15:56:55 crc kubenswrapper[4767]: I0127 15:56:55.947952 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-96zhx" Jan 27 15:56:55 crc kubenswrapper[4767]: I0127 15:56:55.948114 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-96zhx" Jan 27 15:56:55 crc kubenswrapper[4767]: I0127 15:56:55.991091 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-96zhx" Jan 27 15:56:56 crc kubenswrapper[4767]: I0127 15:56:56.301044 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pp2kc" Jan 27 15:56:56 crc kubenswrapper[4767]: I0127 15:56:56.306673 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-96zhx" Jan 27 15:56:58 crc kubenswrapper[4767]: I0127 15:56:58.145736 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x8k6k" Jan 27 15:56:58 crc kubenswrapper[4767]: I0127 15:56:58.146021 4767 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x8k6k" Jan 27 15:56:58 crc kubenswrapper[4767]: I0127 15:56:58.193296 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x8k6k" Jan 27 15:56:58 crc kubenswrapper[4767]: I0127 15:56:58.307175 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x8k6k" Jan 27 15:56:58 crc kubenswrapper[4767]: I0127 15:56:58.367413 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f8p8n" Jan 27 15:56:58 crc kubenswrapper[4767]: I0127 15:56:58.367495 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f8p8n" Jan 27 15:56:58 crc kubenswrapper[4767]: I0127 15:56:58.405162 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f8p8n" Jan 27 15:56:59 crc kubenswrapper[4767]: I0127 15:56:59.317356 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f8p8n" Jan 27 15:58:41 crc kubenswrapper[4767]: I0127 15:58:41.708999 4767 scope.go:117] "RemoveContainer" containerID="26198e480ae52e3c31055d523eee5ce991004cd80a99480be6c5e5b9fd089f55" Jan 27 15:59:24 crc kubenswrapper[4767]: I0127 15:59:24.858256 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:59:24 crc kubenswrapper[4767]: I0127 15:59:24.858849 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:59:54 crc kubenswrapper[4767]: I0127 15:59:54.857823 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:59:54 crc kubenswrapper[4767]: I0127 15:59:54.858470 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:00:00 crc kubenswrapper[4767]: I0127 16:00:00.198550 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492160-tkq4l"] Jan 27 16:00:00 crc kubenswrapper[4767]: I0127 16:00:00.199739 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-tkq4l" Jan 27 16:00:00 crc kubenswrapper[4767]: I0127 16:00:00.202310 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 16:00:00 crc kubenswrapper[4767]: I0127 16:00:00.202485 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 16:00:00 crc kubenswrapper[4767]: I0127 16:00:00.216557 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492160-tkq4l"] Jan 27 16:00:00 crc kubenswrapper[4767]: I0127 16:00:00.391249 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef2e87b5-39f5-453d-b824-925c37604298-config-volume\") pod \"collect-profiles-29492160-tkq4l\" (UID: \"ef2e87b5-39f5-453d-b824-925c37604298\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-tkq4l" Jan 27 16:00:00 crc kubenswrapper[4767]: I0127 16:00:00.391350 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef2e87b5-39f5-453d-b824-925c37604298-secret-volume\") pod \"collect-profiles-29492160-tkq4l\" (UID: \"ef2e87b5-39f5-453d-b824-925c37604298\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-tkq4l" Jan 27 16:00:00 crc kubenswrapper[4767]: I0127 16:00:00.391466 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4zr9\" (UniqueName: \"kubernetes.io/projected/ef2e87b5-39f5-453d-b824-925c37604298-kube-api-access-r4zr9\") pod \"collect-profiles-29492160-tkq4l\" (UID: \"ef2e87b5-39f5-453d-b824-925c37604298\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-tkq4l" Jan 27 16:00:00 crc kubenswrapper[4767]: I0127 16:00:00.493154 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef2e87b5-39f5-453d-b824-925c37604298-config-volume\") pod \"collect-profiles-29492160-tkq4l\" (UID: \"ef2e87b5-39f5-453d-b824-925c37604298\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-tkq4l" Jan 27 16:00:00 crc kubenswrapper[4767]: I0127 16:00:00.493332 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef2e87b5-39f5-453d-b824-925c37604298-secret-volume\") pod \"collect-profiles-29492160-tkq4l\" (UID: \"ef2e87b5-39f5-453d-b824-925c37604298\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-tkq4l" Jan 27 16:00:00 crc kubenswrapper[4767]: I0127 16:00:00.493422 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4zr9\" (UniqueName: \"kubernetes.io/projected/ef2e87b5-39f5-453d-b824-925c37604298-kube-api-access-r4zr9\") pod \"collect-profiles-29492160-tkq4l\" (UID: \"ef2e87b5-39f5-453d-b824-925c37604298\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-tkq4l" Jan 27 16:00:00 crc kubenswrapper[4767]: I0127 16:00:00.494253 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef2e87b5-39f5-453d-b824-925c37604298-config-volume\") pod 
\"collect-profiles-29492160-tkq4l\" (UID: \"ef2e87b5-39f5-453d-b824-925c37604298\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-tkq4l" Jan 27 16:00:00 crc kubenswrapper[4767]: I0127 16:00:00.502891 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef2e87b5-39f5-453d-b824-925c37604298-secret-volume\") pod \"collect-profiles-29492160-tkq4l\" (UID: \"ef2e87b5-39f5-453d-b824-925c37604298\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-tkq4l" Jan 27 16:00:00 crc kubenswrapper[4767]: I0127 16:00:00.518792 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4zr9\" (UniqueName: \"kubernetes.io/projected/ef2e87b5-39f5-453d-b824-925c37604298-kube-api-access-r4zr9\") pod \"collect-profiles-29492160-tkq4l\" (UID: \"ef2e87b5-39f5-453d-b824-925c37604298\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-tkq4l" Jan 27 16:00:00 crc kubenswrapper[4767]: I0127 16:00:00.527255 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-tkq4l" Jan 27 16:00:00 crc kubenswrapper[4767]: I0127 16:00:00.751701 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492160-tkq4l"] Jan 27 16:00:00 crc kubenswrapper[4767]: W0127 16:00:00.758445 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef2e87b5_39f5_453d_b824_925c37604298.slice/crio-6b3126966013da890bebfbb4795ec431d569b58887e624972a60f2e08618eb55 WatchSource:0}: Error finding container 6b3126966013da890bebfbb4795ec431d569b58887e624972a60f2e08618eb55: Status 404 returned error can't find the container with id 6b3126966013da890bebfbb4795ec431d569b58887e624972a60f2e08618eb55 Jan 27 16:00:01 crc kubenswrapper[4767]: I0127 16:00:01.479123 4767 generic.go:334] "Generic (PLEG): container finished" podID="ef2e87b5-39f5-453d-b824-925c37604298" containerID="9b453046a7cc8d0af396d649bbc97084f9b2437b18814a0626147bd3d596bbce" exitCode=0 Jan 27 16:00:01 crc kubenswrapper[4767]: I0127 16:00:01.479184 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-tkq4l" event={"ID":"ef2e87b5-39f5-453d-b824-925c37604298","Type":"ContainerDied","Data":"9b453046a7cc8d0af396d649bbc97084f9b2437b18814a0626147bd3d596bbce"} Jan 27 16:00:01 crc kubenswrapper[4767]: I0127 16:00:01.479615 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-tkq4l" event={"ID":"ef2e87b5-39f5-453d-b824-925c37604298","Type":"ContainerStarted","Data":"6b3126966013da890bebfbb4795ec431d569b58887e624972a60f2e08618eb55"} Jan 27 16:00:02 crc kubenswrapper[4767]: I0127 16:00:02.755519 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-tkq4l" Jan 27 16:00:02 crc kubenswrapper[4767]: I0127 16:00:02.922174 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef2e87b5-39f5-453d-b824-925c37604298-secret-volume\") pod \"ef2e87b5-39f5-453d-b824-925c37604298\" (UID: \"ef2e87b5-39f5-453d-b824-925c37604298\") " Jan 27 16:00:02 crc kubenswrapper[4767]: I0127 16:00:02.922317 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4zr9\" (UniqueName: \"kubernetes.io/projected/ef2e87b5-39f5-453d-b824-925c37604298-kube-api-access-r4zr9\") pod \"ef2e87b5-39f5-453d-b824-925c37604298\" (UID: \"ef2e87b5-39f5-453d-b824-925c37604298\") " Jan 27 16:00:02 crc kubenswrapper[4767]: I0127 16:00:02.922439 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef2e87b5-39f5-453d-b824-925c37604298-config-volume\") pod \"ef2e87b5-39f5-453d-b824-925c37604298\" (UID: \"ef2e87b5-39f5-453d-b824-925c37604298\") " Jan 27 16:00:02 crc kubenswrapper[4767]: I0127 16:00:02.923590 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef2e87b5-39f5-453d-b824-925c37604298-config-volume" (OuterVolumeSpecName: "config-volume") pod "ef2e87b5-39f5-453d-b824-925c37604298" (UID: "ef2e87b5-39f5-453d-b824-925c37604298"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:00:02 crc kubenswrapper[4767]: I0127 16:00:02.930859 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef2e87b5-39f5-453d-b824-925c37604298-kube-api-access-r4zr9" (OuterVolumeSpecName: "kube-api-access-r4zr9") pod "ef2e87b5-39f5-453d-b824-925c37604298" (UID: "ef2e87b5-39f5-453d-b824-925c37604298"). InnerVolumeSpecName "kube-api-access-r4zr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:00:02 crc kubenswrapper[4767]: I0127 16:00:02.932364 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef2e87b5-39f5-453d-b824-925c37604298-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ef2e87b5-39f5-453d-b824-925c37604298" (UID: "ef2e87b5-39f5-453d-b824-925c37604298"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:00:03 crc kubenswrapper[4767]: I0127 16:00:03.024659 4767 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef2e87b5-39f5-453d-b824-925c37604298-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 16:00:03 crc kubenswrapper[4767]: I0127 16:00:03.024729 4767 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef2e87b5-39f5-453d-b824-925c37604298-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 16:00:03 crc kubenswrapper[4767]: I0127 16:00:03.024770 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4zr9\" (UniqueName: \"kubernetes.io/projected/ef2e87b5-39f5-453d-b824-925c37604298-kube-api-access-r4zr9\") on node \"crc\" DevicePath \"\"" Jan 27 16:00:03 crc kubenswrapper[4767]: I0127 16:00:03.493051 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-tkq4l" event={"ID":"ef2e87b5-39f5-453d-b824-925c37604298","Type":"ContainerDied","Data":"6b3126966013da890bebfbb4795ec431d569b58887e624972a60f2e08618eb55"} Jan 27 16:00:03 crc kubenswrapper[4767]: I0127 16:00:03.493094 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-tkq4l" Jan 27 16:00:03 crc kubenswrapper[4767]: I0127 16:00:03.493097 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b3126966013da890bebfbb4795ec431d569b58887e624972a60f2e08618eb55" Jan 27 16:00:24 crc kubenswrapper[4767]: I0127 16:00:24.858240 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:00:24 crc kubenswrapper[4767]: I0127 16:00:24.859265 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:00:24 crc kubenswrapper[4767]: I0127 16:00:24.859347 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" Jan 27 16:00:24 crc kubenswrapper[4767]: I0127 16:00:24.860150 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"353e2744423f1b1adbab04b1b018d0bf34fbc9cefa51f745c7fff9315767a5a5"} pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 16:00:24 crc kubenswrapper[4767]: I0127 16:00:24.860462 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" containerID="cri-o://353e2744423f1b1adbab04b1b018d0bf34fbc9cefa51f745c7fff9315767a5a5" gracePeriod=600 Jan 27 16:00:25 crc kubenswrapper[4767]: I0127 16:00:25.630993 4767 generic.go:334] "Generic (PLEG): container finished" 
podID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerID="353e2744423f1b1adbab04b1b018d0bf34fbc9cefa51f745c7fff9315767a5a5" exitCode=0 Jan 27 16:00:25 crc kubenswrapper[4767]: I0127 16:00:25.631078 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerDied","Data":"353e2744423f1b1adbab04b1b018d0bf34fbc9cefa51f745c7fff9315767a5a5"} Jan 27 16:00:25 crc kubenswrapper[4767]: I0127 16:00:25.631446 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerStarted","Data":"fad0c9cec55858322e531728aa0e6d429308608bc45d2d2ee15b473a2ae6c66a"} Jan 27 16:00:25 crc kubenswrapper[4767]: I0127 16:00:25.631479 4767 scope.go:117] "RemoveContainer" containerID="e7ed48adaa0e9bc3ad71d07ed5596b4b1fc231c226ada212f6d4dce03922dd53" Jan 27 16:02:17 crc kubenswrapper[4767]: I0127 16:02:17.587778 4767 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.173998 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-wsnfv"] Jan 27 16:02:23 crc kubenswrapper[4767]: E0127 16:02:23.174593 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef2e87b5-39f5-453d-b824-925c37604298" containerName="collect-profiles" Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.174613 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef2e87b5-39f5-453d-b824-925c37604298" containerName="collect-profiles" Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.174730 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef2e87b5-39f5-453d-b824-925c37604298" containerName="collect-profiles" Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.175207 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wsnfv" Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.177044 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.177287 4767 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-4trbt" Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.177313 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.189593 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-52dbj"] Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.190290 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-52dbj" Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.192970 4767 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-vgcfs" Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.193911 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-wsnfv"] Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.210150 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-w8lk4"] Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.210950 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-w8lk4" Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.213155 4767 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-gshfn" Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.216773 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-52dbj"] Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.244023 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-w8lk4"] Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.356154 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ntch\" (UniqueName: \"kubernetes.io/projected/5b8d7fa4-0160-4913-a000-6236ad4dd951-kube-api-access-5ntch\") pod \"cert-manager-858654f9db-52dbj\" (UID: \"5b8d7fa4-0160-4913-a000-6236ad4dd951\") " pod="cert-manager/cert-manager-858654f9db-52dbj" Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.356480 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjjx5\" (UniqueName: \"kubernetes.io/projected/85b4a02d-b650-4c41-92a8-694ac0e43340-kube-api-access-jjjx5\") pod \"cert-manager-cainjector-cf98fcc89-wsnfv\" (UID: \"85b4a02d-b650-4c41-92a8-694ac0e43340\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-wsnfv" Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.356631 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh6qr\" (UniqueName: \"kubernetes.io/projected/3ed96d47-389c-4c3d-a118-21c6ba90b4db-kube-api-access-fh6qr\") pod \"cert-manager-webhook-687f57d79b-w8lk4\" (UID: \"3ed96d47-389c-4c3d-a118-21c6ba90b4db\") " pod="cert-manager/cert-manager-webhook-687f57d79b-w8lk4" Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.458002 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjjx5\" (UniqueName: \"kubernetes.io/projected/85b4a02d-b650-4c41-92a8-694ac0e43340-kube-api-access-jjjx5\") pod \"cert-manager-cainjector-cf98fcc89-wsnfv\" (UID: \"85b4a02d-b650-4c41-92a8-694ac0e43340\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-wsnfv" Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.458088 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh6qr\" (UniqueName: \"kubernetes.io/projected/3ed96d47-389c-4c3d-a118-21c6ba90b4db-kube-api-access-fh6qr\") pod \"cert-manager-webhook-687f57d79b-w8lk4\" (UID: \"3ed96d47-389c-4c3d-a118-21c6ba90b4db\") " pod="cert-manager/cert-manager-webhook-687f57d79b-w8lk4" Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 
Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.458146 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ntch\" (UniqueName: \"kubernetes.io/projected/5b8d7fa4-0160-4913-a000-6236ad4dd951-kube-api-access-5ntch\") pod \"cert-manager-858654f9db-52dbj\" (UID: \"5b8d7fa4-0160-4913-a000-6236ad4dd951\") " pod="cert-manager/cert-manager-858654f9db-52dbj"
Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.477130 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjjx5\" (UniqueName: \"kubernetes.io/projected/85b4a02d-b650-4c41-92a8-694ac0e43340-kube-api-access-jjjx5\") pod \"cert-manager-cainjector-cf98fcc89-wsnfv\" (UID: \"85b4a02d-b650-4c41-92a8-694ac0e43340\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-wsnfv"
Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.477147 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh6qr\" (UniqueName: \"kubernetes.io/projected/3ed96d47-389c-4c3d-a118-21c6ba90b4db-kube-api-access-fh6qr\") pod \"cert-manager-webhook-687f57d79b-w8lk4\" (UID: \"3ed96d47-389c-4c3d-a118-21c6ba90b4db\") " pod="cert-manager/cert-manager-webhook-687f57d79b-w8lk4"
Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.477759 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ntch\" (UniqueName: \"kubernetes.io/projected/5b8d7fa4-0160-4913-a000-6236ad4dd951-kube-api-access-5ntch\") pod \"cert-manager-858654f9db-52dbj\" (UID: \"5b8d7fa4-0160-4913-a000-6236ad4dd951\") " pod="cert-manager/cert-manager-858654f9db-52dbj"
Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.489366 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wsnfv"
Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.503526 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-52dbj"
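[Editor's note] The reconciler sequence above (VerifyControllerAttachedVolume, then MountVolume, then MountVolume.SetUp succeeded) is driven by the volume declarations in each pod spec; the kube-api-access-* volumes are the projected service-account-token volumes injected automatically for every pod. A sketch of roughly what that spec looks like in client-go types; the volume and pod names come from the log, while the container name, image, and projection sources are assumptions, since the log does not show them:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // certManagerPod sketches the kind of pod spec behind the reconciler
    // entries above: a projected service-account token volume, mounted
    // read-only at the conventional path.
    func certManagerPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:      "cert-manager-858654f9db-52dbj",
                Namespace: "cert-manager",
            },
            Spec: corev1.PodSpec{
                Volumes: []corev1.Volume{{
                    Name: "kube-api-access-5ntch", // name from the log
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            // Projection sources are assumptions.
                            Sources: []corev1.VolumeProjection{{
                                ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
                                    Path: "token",
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:  "cert-manager-controller",                  // assumption
                    Image: "quay.io/jetstack/cert-manager-controller", // assumption
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "kube-api-access-5ntch",
                        ReadOnly:  true,
                        MountPath: "/var/run/secrets/kubernetes.io/serviceaccount",
                    }},
                }},
            },
        }
    }

    func main() {
        fmt.Println(certManagerPod().Name)
    }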
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-w8lk4" Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.706863 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-52dbj"] Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.717912 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.748902 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-w8lk4"] Jan 27 16:02:23 crc kubenswrapper[4767]: I0127 16:02:23.791553 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-wsnfv"] Jan 27 16:02:23 crc kubenswrapper[4767]: W0127 16:02:23.796094 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85b4a02d_b650_4c41_92a8_694ac0e43340.slice/crio-e7c66c049574ef1fcca15856edebd2815feea51547b5ec112c63cea40c9bf8c3 WatchSource:0}: Error finding container e7c66c049574ef1fcca15856edebd2815feea51547b5ec112c63cea40c9bf8c3: Status 404 returned error can't find the container with id e7c66c049574ef1fcca15856edebd2815feea51547b5ec112c63cea40c9bf8c3 Jan 27 16:02:24 crc kubenswrapper[4767]: I0127 16:02:24.344900 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-52dbj" event={"ID":"5b8d7fa4-0160-4913-a000-6236ad4dd951","Type":"ContainerStarted","Data":"fa7ff49dfde76a948aca144f4f95529b1cec60b79f01c31dcb8b4a0017aa0576"} Jan 27 16:02:24 crc kubenswrapper[4767]: I0127 16:02:24.346672 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wsnfv" event={"ID":"85b4a02d-b650-4c41-92a8-694ac0e43340","Type":"ContainerStarted","Data":"e7c66c049574ef1fcca15856edebd2815feea51547b5ec112c63cea40c9bf8c3"} Jan 27 16:02:24 crc kubenswrapper[4767]: I0127 16:02:24.347483 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-w8lk4" event={"ID":"3ed96d47-389c-4c3d-a118-21c6ba90b4db","Type":"ContainerStarted","Data":"015cf70b16d13675df04b40812268f2eeee7fc39c72f5784bee1ea9a96a1c9b2"} Jan 27 16:02:32 crc kubenswrapper[4767]: I0127 16:02:32.695705 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x97k7"] Jan 27 16:02:32 crc kubenswrapper[4767]: I0127 16:02:32.696520 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="ovn-controller" containerID="cri-o://34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e" gracePeriod=30 Jan 27 16:02:32 crc kubenswrapper[4767]: I0127 16:02:32.696649 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="ovn-acl-logging" containerID="cri-o://740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d" gracePeriod=30 Jan 27 16:02:32 crc kubenswrapper[4767]: I0127 16:02:32.696642 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="nbdb" containerID="cri-o://6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a" gracePeriod=30 Jan 27 
16:02:32 crc kubenswrapper[4767]: I0127 16:02:32.696698 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="northd" containerID="cri-o://e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac" gracePeriod=30 Jan 27 16:02:32 crc kubenswrapper[4767]: I0127 16:02:32.696703 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="kube-rbac-proxy-node" containerID="cri-o://e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a" gracePeriod=30 Jan 27 16:02:32 crc kubenswrapper[4767]: I0127 16:02:32.696751 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f" gracePeriod=30 Jan 27 16:02:32 crc kubenswrapper[4767]: I0127 16:02:32.696733 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="sbdb" containerID="cri-o://1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522" gracePeriod=30 Jan 27 16:02:32 crc kubenswrapper[4767]: I0127 16:02:32.731823 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="ovnkube-controller" containerID="cri-o://f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d" gracePeriod=30 Jan 27 16:02:32 crc kubenswrapper[4767]: I0127 16:02:32.985739 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x97k7_96ceb606-f7e2-4d60-a632-a9443e01b99a/ovnkube-controller/3.log" Jan 27 16:02:32 crc kubenswrapper[4767]: I0127 16:02:32.988238 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x97k7_96ceb606-f7e2-4d60-a632-a9443e01b99a/ovn-acl-logging/0.log" Jan 27 16:02:32 crc kubenswrapper[4767]: I0127 16:02:32.988738 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x97k7_96ceb606-f7e2-4d60-a632-a9443e01b99a/ovn-controller/0.log" Jan 27 16:02:32 crc kubenswrapper[4767]: I0127 16:02:32.989163 4767 util.go:48] "No ready sandbox for pod can be found. 
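[Editor's note] The SyncLoop DELETE above fans out into one "Killing container with a grace period" entry per container of ovnkube-node-x97k7, each using the pod's 30-second termination grace period. A client-go sketch of issuing that kind of deletion with an explicit grace period; the pod and namespace names are from the log, and the kubeconfig path is an assumption:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a clientset from a kubeconfig (path is an assumption).
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Delete the pod with an explicit grace period; the kubelet then
        // kills each container with gracePeriod=30, as in the entries above.
        grace := int64(30)
        err = cs.CoreV1().Pods("openshift-ovn-kubernetes").Delete(
            context.TODO(),
            "ovnkube-node-x97k7",
            metav1.DeleteOptions{GracePeriodSeconds: &grace},
        )
        fmt.Println("delete issued:", err)
    }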
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.040714 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tp5t9"] Jan 27 16:02:33 crc kubenswrapper[4767]: E0127 16:02:33.041032 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="ovnkube-controller" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.041060 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="ovnkube-controller" Jan 27 16:02:33 crc kubenswrapper[4767]: E0127 16:02:33.041076 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="sbdb" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.041088 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="sbdb" Jan 27 16:02:33 crc kubenswrapper[4767]: E0127 16:02:33.041107 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="ovnkube-controller" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.041119 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="ovnkube-controller" Jan 27 16:02:33 crc kubenswrapper[4767]: E0127 16:02:33.041132 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="kubecfg-setup" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.041145 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="kubecfg-setup" Jan 27 16:02:33 crc kubenswrapper[4767]: E0127 16:02:33.041159 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="ovnkube-controller" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.041171 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="ovnkube-controller" Jan 27 16:02:33 crc kubenswrapper[4767]: E0127 16:02:33.041180 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="nbdb" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.041189 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="nbdb" Jan 27 16:02:33 crc kubenswrapper[4767]: E0127 16:02:33.041223 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.041234 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 16:02:33 crc kubenswrapper[4767]: E0127 16:02:33.041250 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="ovn-controller" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.041260 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="ovn-controller" Jan 27 16:02:33 crc kubenswrapper[4767]: E0127 16:02:33.041279 4767 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="northd" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.041289 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="northd" Jan 27 16:02:33 crc kubenswrapper[4767]: E0127 16:02:33.041308 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="ovn-acl-logging" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.041318 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="ovn-acl-logging" Jan 27 16:02:33 crc kubenswrapper[4767]: E0127 16:02:33.041336 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="kube-rbac-proxy-node" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.041346 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="kube-rbac-proxy-node" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.041487 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="ovn-acl-logging" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.041500 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="ovnkube-controller" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.041510 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="nbdb" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.041524 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.041535 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="ovnkube-controller" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.041545 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="kube-rbac-proxy-node" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.041555 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="sbdb" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.041567 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="ovnkube-controller" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.041577 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="northd" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.041589 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="ovn-controller" Jan 27 16:02:33 crc kubenswrapper[4767]: E0127 16:02:33.041704 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="ovnkube-controller" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.041714 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="ovnkube-controller" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.041825 4767 
memory_manager.go:354] "RemoveStaleState removing state" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="ovnkube-controller" Jan 27 16:02:33 crc kubenswrapper[4767]: E0127 16:02:33.041951 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="ovnkube-controller" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.041960 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="ovnkube-controller" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.042063 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerName="ovnkube-controller" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.043855 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.086659 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-run-openvswitch\") pod \"96ceb606-f7e2-4d60-a632-a9443e01b99a\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.086742 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-cni-bin\") pod \"96ceb606-f7e2-4d60-a632-a9443e01b99a\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.086769 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-run-netns\") pod \"96ceb606-f7e2-4d60-a632-a9443e01b99a\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.086794 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-run-ovn\") pod \"96ceb606-f7e2-4d60-a632-a9443e01b99a\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.086835 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/96ceb606-f7e2-4d60-a632-a9443e01b99a-ovnkube-script-lib\") pod \"96ceb606-f7e2-4d60-a632-a9443e01b99a\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.086854 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-kubelet\") pod \"96ceb606-f7e2-4d60-a632-a9443e01b99a\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.086873 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-systemd-units\") pod \"96ceb606-f7e2-4d60-a632-a9443e01b99a\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.086894 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-etc-openvswitch\") pod \"96ceb606-f7e2-4d60-a632-a9443e01b99a\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.086918 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/96ceb606-f7e2-4d60-a632-a9443e01b99a-ovn-node-metrics-cert\") pod \"96ceb606-f7e2-4d60-a632-a9443e01b99a\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.086938 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-cni-netd\") pod \"96ceb606-f7e2-4d60-a632-a9443e01b99a\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.086968 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-run-systemd\") pod \"96ceb606-f7e2-4d60-a632-a9443e01b99a\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.087004 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/96ceb606-f7e2-4d60-a632-a9443e01b99a-env-overrides\") pod \"96ceb606-f7e2-4d60-a632-a9443e01b99a\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.087026 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-var-lib-openvswitch\") pod \"96ceb606-f7e2-4d60-a632-a9443e01b99a\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.087044 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-slash\") pod \"96ceb606-f7e2-4d60-a632-a9443e01b99a\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.087068 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"96ceb606-f7e2-4d60-a632-a9443e01b99a\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.087085 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-log-socket\") pod \"96ceb606-f7e2-4d60-a632-a9443e01b99a\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.087110 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lnqj\" (UniqueName: \"kubernetes.io/projected/96ceb606-f7e2-4d60-a632-a9443e01b99a-kube-api-access-2lnqj\") pod \"96ceb606-f7e2-4d60-a632-a9443e01b99a\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.087145 4767 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-run-ovn-kubernetes\") pod \"96ceb606-f7e2-4d60-a632-a9443e01b99a\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.087177 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/96ceb606-f7e2-4d60-a632-a9443e01b99a-ovnkube-config\") pod \"96ceb606-f7e2-4d60-a632-a9443e01b99a\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.087233 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-node-log\") pod \"96ceb606-f7e2-4d60-a632-a9443e01b99a\" (UID: \"96ceb606-f7e2-4d60-a632-a9443e01b99a\") " Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.087519 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "96ceb606-f7e2-4d60-a632-a9443e01b99a" (UID: "96ceb606-f7e2-4d60-a632-a9443e01b99a"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.087515 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-node-log" (OuterVolumeSpecName: "node-log") pod "96ceb606-f7e2-4d60-a632-a9443e01b99a" (UID: "96ceb606-f7e2-4d60-a632-a9443e01b99a"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.087599 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "96ceb606-f7e2-4d60-a632-a9443e01b99a" (UID: "96ceb606-f7e2-4d60-a632-a9443e01b99a"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.087632 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "96ceb606-f7e2-4d60-a632-a9443e01b99a" (UID: "96ceb606-f7e2-4d60-a632-a9443e01b99a"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.087659 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "96ceb606-f7e2-4d60-a632-a9443e01b99a" (UID: "96ceb606-f7e2-4d60-a632-a9443e01b99a"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.087686 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "96ceb606-f7e2-4d60-a632-a9443e01b99a" (UID: "96ceb606-f7e2-4d60-a632-a9443e01b99a"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.087912 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "96ceb606-f7e2-4d60-a632-a9443e01b99a" (UID: "96ceb606-f7e2-4d60-a632-a9443e01b99a"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.087993 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "96ceb606-f7e2-4d60-a632-a9443e01b99a" (UID: "96ceb606-f7e2-4d60-a632-a9443e01b99a"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.088029 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-slash" (OuterVolumeSpecName: "host-slash") pod "96ceb606-f7e2-4d60-a632-a9443e01b99a" (UID: "96ceb606-f7e2-4d60-a632-a9443e01b99a"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.088068 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "96ceb606-f7e2-4d60-a632-a9443e01b99a" (UID: "96ceb606-f7e2-4d60-a632-a9443e01b99a"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.088075 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "96ceb606-f7e2-4d60-a632-a9443e01b99a" (UID: "96ceb606-f7e2-4d60-a632-a9443e01b99a"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.088077 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-log-socket" (OuterVolumeSpecName: "log-socket") pod "96ceb606-f7e2-4d60-a632-a9443e01b99a" (UID: "96ceb606-f7e2-4d60-a632-a9443e01b99a"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.088118 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "96ceb606-f7e2-4d60-a632-a9443e01b99a" (UID: "96ceb606-f7e2-4d60-a632-a9443e01b99a"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.088158 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "96ceb606-f7e2-4d60-a632-a9443e01b99a" (UID: "96ceb606-f7e2-4d60-a632-a9443e01b99a"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.088177 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96ceb606-f7e2-4d60-a632-a9443e01b99a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "96ceb606-f7e2-4d60-a632-a9443e01b99a" (UID: "96ceb606-f7e2-4d60-a632-a9443e01b99a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.088619 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96ceb606-f7e2-4d60-a632-a9443e01b99a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "96ceb606-f7e2-4d60-a632-a9443e01b99a" (UID: "96ceb606-f7e2-4d60-a632-a9443e01b99a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.088811 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96ceb606-f7e2-4d60-a632-a9443e01b99a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "96ceb606-f7e2-4d60-a632-a9443e01b99a" (UID: "96ceb606-f7e2-4d60-a632-a9443e01b99a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.092557 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96ceb606-f7e2-4d60-a632-a9443e01b99a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "96ceb606-f7e2-4d60-a632-a9443e01b99a" (UID: "96ceb606-f7e2-4d60-a632-a9443e01b99a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.092878 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96ceb606-f7e2-4d60-a632-a9443e01b99a-kube-api-access-2lnqj" (OuterVolumeSpecName: "kube-api-access-2lnqj") pod "96ceb606-f7e2-4d60-a632-a9443e01b99a" (UID: "96ceb606-f7e2-4d60-a632-a9443e01b99a"). InnerVolumeSpecName "kube-api-access-2lnqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.103336 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "96ceb606-f7e2-4d60-a632-a9443e01b99a" (UID: "96ceb606-f7e2-4d60-a632-a9443e01b99a"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.187992 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3142591e-7005-401b-9df6-6123a77310b2-env-overrides\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188044 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-host-run-ovn-kubernetes\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188066 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-log-socket\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188087 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dng8g\" (UniqueName: \"kubernetes.io/projected/3142591e-7005-401b-9df6-6123a77310b2-kube-api-access-dng8g\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188102 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3142591e-7005-401b-9df6-6123a77310b2-ovn-node-metrics-cert\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188120 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-etc-openvswitch\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188166 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-run-ovn\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188248 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-systemd-units\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188270 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-run-openvswitch\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188290 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-host-cni-netd\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188318 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188346 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-host-kubelet\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188426 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-node-log\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188497 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-host-cni-bin\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188545 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-run-systemd\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188571 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-var-lib-openvswitch\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188608 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3142591e-7005-401b-9df6-6123a77310b2-ovnkube-script-lib\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188636 4767 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-host-run-netns\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188657 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3142591e-7005-401b-9df6-6123a77310b2-ovnkube-config\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188687 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-host-slash\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188792 4767 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188814 4767 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/96ceb606-f7e2-4d60-a632-a9443e01b99a-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188831 4767 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188843 4767 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-slash\") on node \"crc\" DevicePath \"\"" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188854 4767 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188866 4767 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-log-socket\") on node \"crc\" DevicePath \"\"" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188879 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lnqj\" (UniqueName: \"kubernetes.io/projected/96ceb606-f7e2-4d60-a632-a9443e01b99a-kube-api-access-2lnqj\") on node \"crc\" DevicePath \"\"" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188891 4767 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188904 4767 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/96ceb606-f7e2-4d60-a632-a9443e01b99a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188914 4767 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-node-log\") on node \"crc\" DevicePath \"\"" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188925 4767 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188935 4767 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188945 4767 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188954 4767 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188964 4767 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/96ceb606-f7e2-4d60-a632-a9443e01b99a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188974 4767 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188983 4767 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.188996 4767 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.189007 4767 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/96ceb606-f7e2-4d60-a632-a9443e01b99a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.189016 4767 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/96ceb606-f7e2-4d60-a632-a9443e01b99a-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.291318 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-systemd-units\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.291422 4767 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-systemd-units\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.291454 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-run-openvswitch\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.291532 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-host-cni-netd\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.291536 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-run-openvswitch\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.291647 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.291711 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-host-cni-bin\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.291729 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-host-kubelet\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.291749 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-node-log\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.291738 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-host-cni-netd\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.291770 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-run-systemd\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.291809 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-run-systemd\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.291849 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-var-lib-openvswitch\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.291881 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-host-cni-bin\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.291917 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-host-kubelet\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.291919 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3142591e-7005-401b-9df6-6123a77310b2-ovnkube-script-lib\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.291942 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-node-log\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.291971 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-var-lib-openvswitch\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.291857 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.291970 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-host-run-netns\") pod 
\"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.292012 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3142591e-7005-401b-9df6-6123a77310b2-ovnkube-config\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.292035 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-host-slash\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.292037 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-host-run-netns\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.292061 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3142591e-7005-401b-9df6-6123a77310b2-env-overrides\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.292086 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-host-run-ovn-kubernetes\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.292108 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-log-socket\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.292140 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dng8g\" (UniqueName: \"kubernetes.io/projected/3142591e-7005-401b-9df6-6123a77310b2-kube-api-access-dng8g\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.292165 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3142591e-7005-401b-9df6-6123a77310b2-ovn-node-metrics-cert\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.292189 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-etc-openvswitch\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.292262 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-run-ovn\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.292348 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-run-ovn\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.293053 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3142591e-7005-401b-9df6-6123a77310b2-ovnkube-config\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.293092 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-host-slash\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.293481 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3142591e-7005-401b-9df6-6123a77310b2-env-overrides\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.293482 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-host-run-ovn-kubernetes\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.293543 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-log-socket\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.293675 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3142591e-7005-401b-9df6-6123a77310b2-ovnkube-script-lib\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.293786 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3142591e-7005-401b-9df6-6123a77310b2-etc-openvswitch\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.299695 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3142591e-7005-401b-9df6-6123a77310b2-ovn-node-metrics-cert\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.314377 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dng8g\" (UniqueName: \"kubernetes.io/projected/3142591e-7005-401b-9df6-6123a77310b2-kube-api-access-dng8g\") pod \"ovnkube-node-tp5t9\" (UID: \"3142591e-7005-401b-9df6-6123a77310b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.358528 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:33 crc kubenswrapper[4767]: W0127 16:02:33.385594 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3142591e_7005_401b_9df6_6123a77310b2.slice/crio-a6e52bbbd7a65380225ff4483abfd49d9ff79addda3b59e54b5ee7648b97657c WatchSource:0}: Error finding container a6e52bbbd7a65380225ff4483abfd49d9ff79addda3b59e54b5ee7648b97657c: Status 404 returned error can't find the container with id a6e52bbbd7a65380225ff4483abfd49d9ff79addda3b59e54b5ee7648b97657c Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.410620 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zfxc7_cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78/kube-multus/2.log" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.411242 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zfxc7_cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78/kube-multus/1.log" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.411293 4767 generic.go:334] "Generic (PLEG): container finished" podID="cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78" containerID="e7b2f4a8fda18721846ff4de34a827a6a4b72c348d58accb69f75befc4f647c5" exitCode=2 Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.411381 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zfxc7" event={"ID":"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78","Type":"ContainerDied","Data":"e7b2f4a8fda18721846ff4de34a827a6a4b72c348d58accb69f75befc4f647c5"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.411486 4767 scope.go:117] "RemoveContainer" containerID="3817cfa4a4454eef8d58130f57a16e1665d28b56c080f84edcc8f341a5e5267f" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.411860 4767 scope.go:117] "RemoveContainer" containerID="e7b2f4a8fda18721846ff4de34a827a6a4b72c348d58accb69f75befc4f647c5" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.414567 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x97k7_96ceb606-f7e2-4d60-a632-a9443e01b99a/ovnkube-controller/3.log" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.419742 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x97k7_96ceb606-f7e2-4d60-a632-a9443e01b99a/ovn-acl-logging/0.log" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420154 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x97k7_96ceb606-f7e2-4d60-a632-a9443e01b99a/ovn-controller/0.log" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420472 4767 generic.go:334] "Generic (PLEG): container finished" 
podID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerID="f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d" exitCode=0 Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420494 4767 generic.go:334] "Generic (PLEG): container finished" podID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerID="1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522" exitCode=0 Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420501 4767 generic.go:334] "Generic (PLEG): container finished" podID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerID="6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a" exitCode=0 Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420509 4767 generic.go:334] "Generic (PLEG): container finished" podID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerID="e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac" exitCode=0 Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420515 4767 generic.go:334] "Generic (PLEG): container finished" podID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerID="137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f" exitCode=0 Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420523 4767 generic.go:334] "Generic (PLEG): container finished" podID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerID="e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a" exitCode=0 Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420531 4767 generic.go:334] "Generic (PLEG): container finished" podID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerID="740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d" exitCode=143 Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420539 4767 generic.go:334] "Generic (PLEG): container finished" podID="96ceb606-f7e2-4d60-a632-a9443e01b99a" containerID="34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e" exitCode=143 Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420575 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" event={"ID":"96ceb606-f7e2-4d60-a632-a9443e01b99a","Type":"ContainerDied","Data":"f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420639 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" event={"ID":"96ceb606-f7e2-4d60-a632-a9443e01b99a","Type":"ContainerDied","Data":"1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420657 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" event={"ID":"96ceb606-f7e2-4d60-a632-a9443e01b99a","Type":"ContainerDied","Data":"6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420714 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" event={"ID":"96ceb606-f7e2-4d60-a632-a9443e01b99a","Type":"ContainerDied","Data":"e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420731 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" event={"ID":"96ceb606-f7e2-4d60-a632-a9443e01b99a","Type":"ContainerDied","Data":"137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f"} Jan 27 16:02:33 crc 
kubenswrapper[4767]: I0127 16:02:33.420743 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" event={"ID":"96ceb606-f7e2-4d60-a632-a9443e01b99a","Type":"ContainerDied","Data":"e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420756 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420769 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420776 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420783 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420789 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420796 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420803 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420809 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420814 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420857 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420866 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" event={"ID":"96ceb606-f7e2-4d60-a632-a9443e01b99a","Type":"ContainerDied","Data":"740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420875 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420882 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420888 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420893 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420900 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420905 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420931 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420938 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420943 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420949 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420958 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" event={"ID":"96ceb606-f7e2-4d60-a632-a9443e01b99a","Type":"ContainerDied","Data":"34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420966 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420973 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420978 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.420985 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.421010 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.421017 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.421023 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.421029 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.421035 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.421042 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.421051 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" event={"ID":"96ceb606-f7e2-4d60-a632-a9443e01b99a","Type":"ContainerDied","Data":"aad581c6d3092293f8654fbcd197e311bd134a859ed2e9d73d4e66e141518e4c"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.421060 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.421067 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.421092 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.421098 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.421103 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.421109 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.421114 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.421119 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.421125 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.421131 4767 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.421241 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x97k7" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.424556 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-w8lk4" event={"ID":"3ed96d47-389c-4c3d-a118-21c6ba90b4db","Type":"ContainerStarted","Data":"0705ffbe7793d7905807a0027faccfd6dbb7f85af892b79207cf62b6e8c7f090"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.425152 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-w8lk4" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.427007 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-52dbj" event={"ID":"5b8d7fa4-0160-4913-a000-6236ad4dd951","Type":"ContainerStarted","Data":"e4ab886c3ebfc561f98474e5bd66a2bb76c50c408d138ddd2e0a7ca0fe9a596b"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.428582 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" event={"ID":"3142591e-7005-401b-9df6-6123a77310b2","Type":"ContainerStarted","Data":"a6e52bbbd7a65380225ff4483abfd49d9ff79addda3b59e54b5ee7648b97657c"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.432672 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wsnfv" event={"ID":"85b4a02d-b650-4c41-92a8-694ac0e43340","Type":"ContainerStarted","Data":"3ad68b05cda35a944ee911667ceef251fae384f4fb8cb6eba3f8f22ad322aee0"} Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.458259 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-52dbj" podStartSLOduration=1.884742463 podStartE2EDuration="10.458193212s" podCreationTimestamp="2026-01-27 16:02:23 +0000 UTC" firstStartedPulling="2026-01-27 16:02:23.717609812 +0000 UTC m=+766.106627335" lastFinishedPulling="2026-01-27 16:02:32.291060561 +0000 UTC m=+774.680078084" observedRunningTime="2026-01-27 16:02:33.458177991 +0000 UTC m=+775.847195534" watchObservedRunningTime="2026-01-27 16:02:33.458193212 +0000 UTC m=+775.847210755" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.481479 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wsnfv" podStartSLOduration=1.988871785 podStartE2EDuration="10.481450302s" podCreationTimestamp="2026-01-27 16:02:23 +0000 UTC" firstStartedPulling="2026-01-27 16:02:23.798672329 +0000 UTC m=+766.187689852" lastFinishedPulling="2026-01-27 16:02:32.291250826 +0000 UTC m=+774.680268369" observedRunningTime="2026-01-27 16:02:33.479092104 +0000 UTC m=+775.868109647" watchObservedRunningTime="2026-01-27 16:02:33.481450302 +0000 UTC m=+775.870467845" Jan 27 
16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.495061 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-w8lk4" podStartSLOduration=1.8345594360000002 podStartE2EDuration="10.495042044s" podCreationTimestamp="2026-01-27 16:02:23 +0000 UTC" firstStartedPulling="2026-01-27 16:02:23.756157253 +0000 UTC m=+766.145174776" lastFinishedPulling="2026-01-27 16:02:32.416639851 +0000 UTC m=+774.805657384" observedRunningTime="2026-01-27 16:02:33.494737325 +0000 UTC m=+775.883754858" watchObservedRunningTime="2026-01-27 16:02:33.495042044 +0000 UTC m=+775.884059567" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.514612 4767 scope.go:117] "RemoveContainer" containerID="f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.547111 4767 scope.go:117] "RemoveContainer" containerID="a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.554018 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x97k7"] Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.560986 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x97k7"] Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.571761 4767 scope.go:117] "RemoveContainer" containerID="1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.611073 4767 scope.go:117] "RemoveContainer" containerID="6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.671568 4767 scope.go:117] "RemoveContainer" containerID="e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.695915 4767 scope.go:117] "RemoveContainer" containerID="137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.717661 4767 scope.go:117] "RemoveContainer" containerID="e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.734161 4767 scope.go:117] "RemoveContainer" containerID="740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.746405 4767 scope.go:117] "RemoveContainer" containerID="34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.764142 4767 scope.go:117] "RemoveContainer" containerID="46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.788238 4767 scope.go:117] "RemoveContainer" containerID="f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d" Jan 27 16:02:33 crc kubenswrapper[4767]: E0127 16:02:33.792328 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d\": container with ID starting with f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d not found: ID does not exist" containerID="f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.792572 4767 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d"} err="failed to get container status \"f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d\": rpc error: code = NotFound desc = could not find container \"f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d\": container with ID starting with f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.792847 4767 scope.go:117] "RemoveContainer" containerID="a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb" Jan 27 16:02:33 crc kubenswrapper[4767]: E0127 16:02:33.793815 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb\": container with ID starting with a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb not found: ID does not exist" containerID="a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.794146 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb"} err="failed to get container status \"a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb\": rpc error: code = NotFound desc = could not find container \"a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb\": container with ID starting with a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.794587 4767 scope.go:117] "RemoveContainer" containerID="1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522" Jan 27 16:02:33 crc kubenswrapper[4767]: E0127 16:02:33.795285 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\": container with ID starting with 1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522 not found: ID does not exist" containerID="1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.795326 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522"} err="failed to get container status \"1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\": rpc error: code = NotFound desc = could not find container \"1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\": container with ID starting with 1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522 not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.795362 4767 scope.go:117] "RemoveContainer" containerID="6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a" Jan 27 16:02:33 crc kubenswrapper[4767]: E0127 16:02:33.796776 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\": container with ID starting with 6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a not found: ID does not exist" 
containerID="6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.796806 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a"} err="failed to get container status \"6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\": rpc error: code = NotFound desc = could not find container \"6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\": container with ID starting with 6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.796822 4767 scope.go:117] "RemoveContainer" containerID="e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac" Jan 27 16:02:33 crc kubenswrapper[4767]: E0127 16:02:33.802316 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\": container with ID starting with e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac not found: ID does not exist" containerID="e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.802504 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac"} err="failed to get container status \"e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\": rpc error: code = NotFound desc = could not find container \"e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\": container with ID starting with e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.802626 4767 scope.go:117] "RemoveContainer" containerID="137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f" Jan 27 16:02:33 crc kubenswrapper[4767]: E0127 16:02:33.803128 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\": container with ID starting with 137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f not found: ID does not exist" containerID="137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.803169 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f"} err="failed to get container status \"137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\": rpc error: code = NotFound desc = could not find container \"137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\": container with ID starting with 137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.803192 4767 scope.go:117] "RemoveContainer" containerID="e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a" Jan 27 16:02:33 crc kubenswrapper[4767]: E0127 16:02:33.804097 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\": container with ID starting with e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a not found: ID does not exist" containerID="e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.804243 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a"} err="failed to get container status \"e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\": rpc error: code = NotFound desc = could not find container \"e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\": container with ID starting with e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.804391 4767 scope.go:117] "RemoveContainer" containerID="740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d" Jan 27 16:02:33 crc kubenswrapper[4767]: E0127 16:02:33.804794 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\": container with ID starting with 740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d not found: ID does not exist" containerID="740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.804826 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d"} err="failed to get container status \"740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\": rpc error: code = NotFound desc = could not find container \"740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\": container with ID starting with 740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.804843 4767 scope.go:117] "RemoveContainer" containerID="34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e" Jan 27 16:02:33 crc kubenswrapper[4767]: E0127 16:02:33.805357 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\": container with ID starting with 34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e not found: ID does not exist" containerID="34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.805541 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e"} err="failed to get container status \"34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\": rpc error: code = NotFound desc = could not find container \"34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\": container with ID starting with 34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.805644 4767 scope.go:117] "RemoveContainer" containerID="46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d" Jan 27 16:02:33 crc 
kubenswrapper[4767]: E0127 16:02:33.806031 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\": container with ID starting with 46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d not found: ID does not exist" containerID="46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.806133 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d"} err="failed to get container status \"46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\": rpc error: code = NotFound desc = could not find container \"46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\": container with ID starting with 46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.806289 4767 scope.go:117] "RemoveContainer" containerID="f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.806733 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d"} err="failed to get container status \"f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d\": rpc error: code = NotFound desc = could not find container \"f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d\": container with ID starting with f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.806755 4767 scope.go:117] "RemoveContainer" containerID="a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.807153 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb"} err="failed to get container status \"a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb\": rpc error: code = NotFound desc = could not find container \"a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb\": container with ID starting with a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.807174 4767 scope.go:117] "RemoveContainer" containerID="1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.807825 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522"} err="failed to get container status \"1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\": rpc error: code = NotFound desc = could not find container \"1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\": container with ID starting with 1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522 not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.807934 4767 scope.go:117] "RemoveContainer" containerID="6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a" Jan 27 16:02:33 crc 
kubenswrapper[4767]: I0127 16:02:33.808324 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a"} err="failed to get container status \"6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\": rpc error: code = NotFound desc = could not find container \"6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\": container with ID starting with 6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.808352 4767 scope.go:117] "RemoveContainer" containerID="e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.808654 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac"} err="failed to get container status \"e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\": rpc error: code = NotFound desc = could not find container \"e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\": container with ID starting with e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.808745 4767 scope.go:117] "RemoveContainer" containerID="137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.809187 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f"} err="failed to get container status \"137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\": rpc error: code = NotFound desc = could not find container \"137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\": container with ID starting with 137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.809240 4767 scope.go:117] "RemoveContainer" containerID="e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.809541 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a"} err="failed to get container status \"e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\": rpc error: code = NotFound desc = could not find container \"e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\": container with ID starting with e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.809643 4767 scope.go:117] "RemoveContainer" containerID="740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.810147 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d"} err="failed to get container status \"740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\": rpc error: code = NotFound desc = could not find container \"740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\": container with ID 
starting with 740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.810173 4767 scope.go:117] "RemoveContainer" containerID="34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.810482 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e"} err="failed to get container status \"34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\": rpc error: code = NotFound desc = could not find container \"34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\": container with ID starting with 34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.810669 4767 scope.go:117] "RemoveContainer" containerID="46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.810996 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d"} err="failed to get container status \"46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\": rpc error: code = NotFound desc = could not find container \"46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\": container with ID starting with 46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.811019 4767 scope.go:117] "RemoveContainer" containerID="f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.811306 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d"} err="failed to get container status \"f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d\": rpc error: code = NotFound desc = could not find container \"f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d\": container with ID starting with f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.811416 4767 scope.go:117] "RemoveContainer" containerID="a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.811747 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb"} err="failed to get container status \"a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb\": rpc error: code = NotFound desc = could not find container \"a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb\": container with ID starting with a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.811861 4767 scope.go:117] "RemoveContainer" containerID="1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.812630 4767 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522"} err="failed to get container status \"1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\": rpc error: code = NotFound desc = could not find container \"1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\": container with ID starting with 1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522 not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.812651 4767 scope.go:117] "RemoveContainer" containerID="6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.813017 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a"} err="failed to get container status \"6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\": rpc error: code = NotFound desc = could not find container \"6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\": container with ID starting with 6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.813189 4767 scope.go:117] "RemoveContainer" containerID="e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.814411 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac"} err="failed to get container status \"e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\": rpc error: code = NotFound desc = could not find container \"e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\": container with ID starting with e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.814431 4767 scope.go:117] "RemoveContainer" containerID="137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.814712 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f"} err="failed to get container status \"137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\": rpc error: code = NotFound desc = could not find container \"137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\": container with ID starting with 137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.814833 4767 scope.go:117] "RemoveContainer" containerID="e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.815241 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a"} err="failed to get container status \"e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\": rpc error: code = NotFound desc = could not find container \"e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\": container with ID starting with e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a not found: ID does not exist" Jan 
27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.815267 4767 scope.go:117] "RemoveContainer" containerID="740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.816781 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d"} err="failed to get container status \"740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\": rpc error: code = NotFound desc = could not find container \"740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\": container with ID starting with 740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.816901 4767 scope.go:117] "RemoveContainer" containerID="34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.817339 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e"} err="failed to get container status \"34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\": rpc error: code = NotFound desc = could not find container \"34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\": container with ID starting with 34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.817367 4767 scope.go:117] "RemoveContainer" containerID="46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.817698 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d"} err="failed to get container status \"46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\": rpc error: code = NotFound desc = could not find container \"46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\": container with ID starting with 46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.817724 4767 scope.go:117] "RemoveContainer" containerID="f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.818091 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d"} err="failed to get container status \"f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d\": rpc error: code = NotFound desc = could not find container \"f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d\": container with ID starting with f9cef08470d4e6c3ef23658d303da27d7714e0a1a8e9332af7d26e334fe44d3d not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.818126 4767 scope.go:117] "RemoveContainer" containerID="a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.818501 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb"} err="failed to get container status 
\"a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb\": rpc error: code = NotFound desc = could not find container \"a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb\": container with ID starting with a738457c903241fb5a776c2ca052da9636d6dffda4d00b9dde48ae249818f0eb not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.818528 4767 scope.go:117] "RemoveContainer" containerID="1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.818838 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522"} err="failed to get container status \"1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\": rpc error: code = NotFound desc = could not find container \"1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522\": container with ID starting with 1f9e5f966062c74569a9621c35867e98bcb457a14f313b50c357e835a7746522 not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.818865 4767 scope.go:117] "RemoveContainer" containerID="6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.819113 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a"} err="failed to get container status \"6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\": rpc error: code = NotFound desc = could not find container \"6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a\": container with ID starting with 6023a4fd0c5875f4e28015ca7bbccb5cfb1f832777ddcd1e243000d7547ed34a not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.819131 4767 scope.go:117] "RemoveContainer" containerID="e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.819421 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac"} err="failed to get container status \"e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\": rpc error: code = NotFound desc = could not find container \"e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac\": container with ID starting with e316fd160dc74b6b79184df4199e0ea82dd36d3f3eea3b8db95bf57011f993ac not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.819439 4767 scope.go:117] "RemoveContainer" containerID="137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.819700 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f"} err="failed to get container status \"137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\": rpc error: code = NotFound desc = could not find container \"137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f\": container with ID starting with 137f1be7c78e60ab3d468074674b20a4ec615ff575c60cc03e115ef5b924707f not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.819717 4767 scope.go:117] "RemoveContainer" 
containerID="e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.820008 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a"} err="failed to get container status \"e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\": rpc error: code = NotFound desc = could not find container \"e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a\": container with ID starting with e805e6c0def876d07374095a07c3bc7c165ec7eb8c418b6c9fc0649637f6b67a not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.820031 4767 scope.go:117] "RemoveContainer" containerID="740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.820423 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d"} err="failed to get container status \"740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\": rpc error: code = NotFound desc = could not find container \"740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d\": container with ID starting with 740b3b32709a43c8697ac215d90e28bcc35292b45b453240287e259bdbdf345d not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.820444 4767 scope.go:117] "RemoveContainer" containerID="34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.820724 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e"} err="failed to get container status \"34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\": rpc error: code = NotFound desc = could not find container \"34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e\": container with ID starting with 34800e8d5df49110195fcd8c14d71c5854df9d3c884d54f7f094a76c20f8361e not found: ID does not exist" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.820753 4767 scope.go:117] "RemoveContainer" containerID="46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d" Jan 27 16:02:33 crc kubenswrapper[4767]: I0127 16:02:33.821055 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d"} err="failed to get container status \"46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\": rpc error: code = NotFound desc = could not find container \"46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d\": container with ID starting with 46750a88b81621db54f56c5337de0264d0e0485e10dbca046b44a19c2123026d not found: ID does not exist" Jan 27 16:02:34 crc kubenswrapper[4767]: I0127 16:02:34.336801 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96ceb606-f7e2-4d60-a632-a9443e01b99a" path="/var/lib/kubelet/pods/96ceb606-f7e2-4d60-a632-a9443e01b99a/volumes" Jan 27 16:02:34 crc kubenswrapper[4767]: I0127 16:02:34.449568 4767 generic.go:334] "Generic (PLEG): container finished" podID="3142591e-7005-401b-9df6-6123a77310b2" containerID="1f89f9cf0b221defe44750baa72236f1c6eb1021d02f1919752a8a3072f17956" exitCode=0 Jan 27 16:02:34 crc kubenswrapper[4767]: I0127 
16:02:34.449636 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" event={"ID":"3142591e-7005-401b-9df6-6123a77310b2","Type":"ContainerDied","Data":"1f89f9cf0b221defe44750baa72236f1c6eb1021d02f1919752a8a3072f17956"} Jan 27 16:02:34 crc kubenswrapper[4767]: I0127 16:02:34.457653 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zfxc7_cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78/kube-multus/2.log" Jan 27 16:02:34 crc kubenswrapper[4767]: I0127 16:02:34.457753 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zfxc7" event={"ID":"cda4f4aa-9e6b-4fdd-b10a-10fc7ffb0e78","Type":"ContainerStarted","Data":"729351a149e0f8d89ab12c0e15b9c31b424768ad5aaa20318294292219849232"} Jan 27 16:02:35 crc kubenswrapper[4767]: I0127 16:02:35.470231 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" event={"ID":"3142591e-7005-401b-9df6-6123a77310b2","Type":"ContainerStarted","Data":"8a6a4adc57d2b0b9b7b8c9cc757d1fcb9f982e7f3a06b774e5021c6c26bd4a07"} Jan 27 16:02:35 crc kubenswrapper[4767]: I0127 16:02:35.470619 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" event={"ID":"3142591e-7005-401b-9df6-6123a77310b2","Type":"ContainerStarted","Data":"e64b34b09ee2640207a88bdb30ad67726359083176b4b962ecdda708eeb6bec9"} Jan 27 16:02:35 crc kubenswrapper[4767]: I0127 16:02:35.470654 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" event={"ID":"3142591e-7005-401b-9df6-6123a77310b2","Type":"ContainerStarted","Data":"0e216867b543690c863c94d50c114e750f85eeaea4d4a50b52efaf1dbfce1a9d"} Jan 27 16:02:35 crc kubenswrapper[4767]: I0127 16:02:35.470683 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" event={"ID":"3142591e-7005-401b-9df6-6123a77310b2","Type":"ContainerStarted","Data":"73831a73b155ba75c58a4bb0ade56f360a3479924cb264f80f0e79fe35de0365"} Jan 27 16:02:35 crc kubenswrapper[4767]: I0127 16:02:35.470709 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" event={"ID":"3142591e-7005-401b-9df6-6123a77310b2","Type":"ContainerStarted","Data":"ee34a5c49cfb9b96b5fdce5b584082f6903ba6dc0735170a2b0ccac8d82b0d77"} Jan 27 16:02:35 crc kubenswrapper[4767]: I0127 16:02:35.470733 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" event={"ID":"3142591e-7005-401b-9df6-6123a77310b2","Type":"ContainerStarted","Data":"cba20afaf53a9d8ef15f8274ab81c32ade22d43d130acc0dcbeacc1c5c99c644"} Jan 27 16:02:37 crc kubenswrapper[4767]: I0127 16:02:37.487556 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" event={"ID":"3142591e-7005-401b-9df6-6123a77310b2","Type":"ContainerStarted","Data":"8ecee8a9ceb83cafee100f23594d021a0c97bb676c28546387da29b23dc05851"} Jan 27 16:02:38 crc kubenswrapper[4767]: I0127 16:02:38.528478 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-w8lk4" Jan 27 16:02:40 crc kubenswrapper[4767]: I0127 16:02:40.508806 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" event={"ID":"3142591e-7005-401b-9df6-6123a77310b2","Type":"ContainerStarted","Data":"1cefd6593090387cf72561b9031e6f0024d13c588f14b25bc2f310cce2d4a976"} Jan 27 
16:02:40 crc kubenswrapper[4767]: I0127 16:02:40.511870 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:40 crc kubenswrapper[4767]: I0127 16:02:40.543426 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:40 crc kubenswrapper[4767]: I0127 16:02:40.544729 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" podStartSLOduration=7.544701498 podStartE2EDuration="7.544701498s" podCreationTimestamp="2026-01-27 16:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:02:40.543166124 +0000 UTC m=+782.932183677" watchObservedRunningTime="2026-01-27 16:02:40.544701498 +0000 UTC m=+782.933719031" Jan 27 16:02:41 crc kubenswrapper[4767]: I0127 16:02:41.513822 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:41 crc kubenswrapper[4767]: I0127 16:02:41.513903 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:41 crc kubenswrapper[4767]: I0127 16:02:41.586181 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:02:54 crc kubenswrapper[4767]: I0127 16:02:54.857805 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:02:54 crc kubenswrapper[4767]: I0127 16:02:54.858277 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:03:03 crc kubenswrapper[4767]: I0127 16:03:03.381978 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tp5t9" Jan 27 16:03:12 crc kubenswrapper[4767]: I0127 16:03:12.861103 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc"] Jan 27 16:03:12 crc kubenswrapper[4767]: I0127 16:03:12.863017 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc" Jan 27 16:03:12 crc kubenswrapper[4767]: I0127 16:03:12.865314 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 16:03:12 crc kubenswrapper[4767]: I0127 16:03:12.872348 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc"] Jan 27 16:03:12 crc kubenswrapper[4767]: I0127 16:03:12.925989 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c66df55-20ac-4827-b531-7284399769c1-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc\" (UID: \"8c66df55-20ac-4827-b531-7284399769c1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc" Jan 27 16:03:12 crc kubenswrapper[4767]: I0127 16:03:12.926091 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cghkt\" (UniqueName: \"kubernetes.io/projected/8c66df55-20ac-4827-b531-7284399769c1-kube-api-access-cghkt\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc\" (UID: \"8c66df55-20ac-4827-b531-7284399769c1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc" Jan 27 16:03:12 crc kubenswrapper[4767]: I0127 16:03:12.926284 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c66df55-20ac-4827-b531-7284399769c1-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc\" (UID: \"8c66df55-20ac-4827-b531-7284399769c1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc" Jan 27 16:03:13 crc kubenswrapper[4767]: I0127 16:03:13.027376 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c66df55-20ac-4827-b531-7284399769c1-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc\" (UID: \"8c66df55-20ac-4827-b531-7284399769c1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc" Jan 27 16:03:13 crc kubenswrapper[4767]: I0127 16:03:13.027443 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cghkt\" (UniqueName: \"kubernetes.io/projected/8c66df55-20ac-4827-b531-7284399769c1-kube-api-access-cghkt\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc\" (UID: \"8c66df55-20ac-4827-b531-7284399769c1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc" Jan 27 16:03:13 crc kubenswrapper[4767]: I0127 16:03:13.027480 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c66df55-20ac-4827-b531-7284399769c1-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc\" (UID: \"8c66df55-20ac-4827-b531-7284399769c1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc" Jan 27 16:03:13 crc kubenswrapper[4767]: I0127 16:03:13.028050 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/8c66df55-20ac-4827-b531-7284399769c1-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc\" (UID: \"8c66df55-20ac-4827-b531-7284399769c1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc" Jan 27 16:03:13 crc kubenswrapper[4767]: I0127 16:03:13.028393 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c66df55-20ac-4827-b531-7284399769c1-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc\" (UID: \"8c66df55-20ac-4827-b531-7284399769c1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc" Jan 27 16:03:13 crc kubenswrapper[4767]: I0127 16:03:13.046825 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cghkt\" (UniqueName: \"kubernetes.io/projected/8c66df55-20ac-4827-b531-7284399769c1-kube-api-access-cghkt\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc\" (UID: \"8c66df55-20ac-4827-b531-7284399769c1\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc" Jan 27 16:03:13 crc kubenswrapper[4767]: I0127 16:03:13.181644 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc" Jan 27 16:03:13 crc kubenswrapper[4767]: I0127 16:03:13.411363 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc"] Jan 27 16:03:13 crc kubenswrapper[4767]: I0127 16:03:13.715390 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc" event={"ID":"8c66df55-20ac-4827-b531-7284399769c1","Type":"ContainerStarted","Data":"75ca2145489f25c995d5b5418743311a3ad166e4d3a96c1147677399f09d96bc"} Jan 27 16:03:13 crc kubenswrapper[4767]: I0127 16:03:13.715790 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc" event={"ID":"8c66df55-20ac-4827-b531-7284399769c1","Type":"ContainerStarted","Data":"f953c832d8f4e4aa9b874dfc26c5bf3cd52f52ea52f35db614d8da9de2f9f160"} Jan 27 16:03:14 crc kubenswrapper[4767]: I0127 16:03:14.723396 4767 generic.go:334] "Generic (PLEG): container finished" podID="8c66df55-20ac-4827-b531-7284399769c1" containerID="75ca2145489f25c995d5b5418743311a3ad166e4d3a96c1147677399f09d96bc" exitCode=0 Jan 27 16:03:14 crc kubenswrapper[4767]: I0127 16:03:14.723470 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc" event={"ID":"8c66df55-20ac-4827-b531-7284399769c1","Type":"ContainerDied","Data":"75ca2145489f25c995d5b5418743311a3ad166e4d3a96c1147677399f09d96bc"} Jan 27 16:03:15 crc kubenswrapper[4767]: I0127 16:03:15.203462 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lm56n"] Jan 27 16:03:15 crc kubenswrapper[4767]: I0127 16:03:15.205380 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lm56n" Jan 27 16:03:15 crc kubenswrapper[4767]: I0127 16:03:15.210554 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lm56n"] Jan 27 16:03:15 crc kubenswrapper[4767]: I0127 16:03:15.362413 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd2789c8-0e03-4824-a876-61a5868c8691-catalog-content\") pod \"redhat-operators-lm56n\" (UID: \"fd2789c8-0e03-4824-a876-61a5868c8691\") " pod="openshift-marketplace/redhat-operators-lm56n" Jan 27 16:03:15 crc kubenswrapper[4767]: I0127 16:03:15.362482 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd2789c8-0e03-4824-a876-61a5868c8691-utilities\") pod \"redhat-operators-lm56n\" (UID: \"fd2789c8-0e03-4824-a876-61a5868c8691\") " pod="openshift-marketplace/redhat-operators-lm56n" Jan 27 16:03:15 crc kubenswrapper[4767]: I0127 16:03:15.362597 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kv29\" (UniqueName: \"kubernetes.io/projected/fd2789c8-0e03-4824-a876-61a5868c8691-kube-api-access-6kv29\") pod \"redhat-operators-lm56n\" (UID: \"fd2789c8-0e03-4824-a876-61a5868c8691\") " pod="openshift-marketplace/redhat-operators-lm56n" Jan 27 16:03:15 crc kubenswrapper[4767]: I0127 16:03:15.463236 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd2789c8-0e03-4824-a876-61a5868c8691-utilities\") pod \"redhat-operators-lm56n\" (UID: \"fd2789c8-0e03-4824-a876-61a5868c8691\") " pod="openshift-marketplace/redhat-operators-lm56n" Jan 27 16:03:15 crc kubenswrapper[4767]: I0127 16:03:15.463319 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kv29\" (UniqueName: \"kubernetes.io/projected/fd2789c8-0e03-4824-a876-61a5868c8691-kube-api-access-6kv29\") pod \"redhat-operators-lm56n\" (UID: \"fd2789c8-0e03-4824-a876-61a5868c8691\") " pod="openshift-marketplace/redhat-operators-lm56n" Jan 27 16:03:15 crc kubenswrapper[4767]: I0127 16:03:15.463375 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd2789c8-0e03-4824-a876-61a5868c8691-catalog-content\") pod \"redhat-operators-lm56n\" (UID: \"fd2789c8-0e03-4824-a876-61a5868c8691\") " pod="openshift-marketplace/redhat-operators-lm56n" Jan 27 16:03:15 crc kubenswrapper[4767]: I0127 16:03:15.463716 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd2789c8-0e03-4824-a876-61a5868c8691-utilities\") pod \"redhat-operators-lm56n\" (UID: \"fd2789c8-0e03-4824-a876-61a5868c8691\") " pod="openshift-marketplace/redhat-operators-lm56n" Jan 27 16:03:15 crc kubenswrapper[4767]: I0127 16:03:15.463776 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd2789c8-0e03-4824-a876-61a5868c8691-catalog-content\") pod \"redhat-operators-lm56n\" (UID: \"fd2789c8-0e03-4824-a876-61a5868c8691\") " pod="openshift-marketplace/redhat-operators-lm56n" Jan 27 16:03:15 crc kubenswrapper[4767]: I0127 16:03:15.486850 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6kv29\" (UniqueName: \"kubernetes.io/projected/fd2789c8-0e03-4824-a876-61a5868c8691-kube-api-access-6kv29\") pod \"redhat-operators-lm56n\" (UID: \"fd2789c8-0e03-4824-a876-61a5868c8691\") " pod="openshift-marketplace/redhat-operators-lm56n" Jan 27 16:03:15 crc kubenswrapper[4767]: I0127 16:03:15.553577 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lm56n" Jan 27 16:03:15 crc kubenswrapper[4767]: I0127 16:03:15.740219 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lm56n"] Jan 27 16:03:16 crc kubenswrapper[4767]: I0127 16:03:16.735192 4767 generic.go:334] "Generic (PLEG): container finished" podID="8c66df55-20ac-4827-b531-7284399769c1" containerID="e465366f012279712e9c66f1ecfba049b1211b8f4d324efdf374a73fb04dff50" exitCode=0 Jan 27 16:03:16 crc kubenswrapper[4767]: I0127 16:03:16.735269 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc" event={"ID":"8c66df55-20ac-4827-b531-7284399769c1","Type":"ContainerDied","Data":"e465366f012279712e9c66f1ecfba049b1211b8f4d324efdf374a73fb04dff50"} Jan 27 16:03:16 crc kubenswrapper[4767]: I0127 16:03:16.737284 4767 generic.go:334] "Generic (PLEG): container finished" podID="fd2789c8-0e03-4824-a876-61a5868c8691" containerID="43debd9a7702548c68bc24e6832ab233123858e871b9d14dac60b7afd1a1f440" exitCode=0 Jan 27 16:03:16 crc kubenswrapper[4767]: I0127 16:03:16.737309 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lm56n" event={"ID":"fd2789c8-0e03-4824-a876-61a5868c8691","Type":"ContainerDied","Data":"43debd9a7702548c68bc24e6832ab233123858e871b9d14dac60b7afd1a1f440"} Jan 27 16:03:16 crc kubenswrapper[4767]: I0127 16:03:16.737325 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lm56n" event={"ID":"fd2789c8-0e03-4824-a876-61a5868c8691","Type":"ContainerStarted","Data":"3d72b5f2da41baa66cfbbe869b3995375909ee5bd1ece6e1e38aba3a51b9524b"} Jan 27 16:03:17 crc kubenswrapper[4767]: I0127 16:03:17.747834 4767 generic.go:334] "Generic (PLEG): container finished" podID="8c66df55-20ac-4827-b531-7284399769c1" containerID="438224cc5e54fa79a552143b1d4fbded7e326e31e29c4f466d6ffbabc12d5c30" exitCode=0 Jan 27 16:03:17 crc kubenswrapper[4767]: I0127 16:03:17.747966 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc" event={"ID":"8c66df55-20ac-4827-b531-7284399769c1","Type":"ContainerDied","Data":"438224cc5e54fa79a552143b1d4fbded7e326e31e29c4f466d6ffbabc12d5c30"} Jan 27 16:03:18 crc kubenswrapper[4767]: I0127 16:03:18.758803 4767 generic.go:334] "Generic (PLEG): container finished" podID="fd2789c8-0e03-4824-a876-61a5868c8691" containerID="834e727019ded8d0da74ff28802b269e79178810b29d9891cfa7341edde09b44" exitCode=0 Jan 27 16:03:18 crc kubenswrapper[4767]: I0127 16:03:18.758912 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lm56n" event={"ID":"fd2789c8-0e03-4824-a876-61a5868c8691","Type":"ContainerDied","Data":"834e727019ded8d0da74ff28802b269e79178810b29d9891cfa7341edde09b44"} Jan 27 16:03:19 crc kubenswrapper[4767]: I0127 16:03:19.014421 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc" Jan 27 16:03:19 crc kubenswrapper[4767]: I0127 16:03:19.110139 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cghkt\" (UniqueName: \"kubernetes.io/projected/8c66df55-20ac-4827-b531-7284399769c1-kube-api-access-cghkt\") pod \"8c66df55-20ac-4827-b531-7284399769c1\" (UID: \"8c66df55-20ac-4827-b531-7284399769c1\") " Jan 27 16:03:19 crc kubenswrapper[4767]: I0127 16:03:19.110323 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c66df55-20ac-4827-b531-7284399769c1-util\") pod \"8c66df55-20ac-4827-b531-7284399769c1\" (UID: \"8c66df55-20ac-4827-b531-7284399769c1\") " Jan 27 16:03:19 crc kubenswrapper[4767]: I0127 16:03:19.110400 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c66df55-20ac-4827-b531-7284399769c1-bundle\") pod \"8c66df55-20ac-4827-b531-7284399769c1\" (UID: \"8c66df55-20ac-4827-b531-7284399769c1\") " Jan 27 16:03:19 crc kubenswrapper[4767]: I0127 16:03:19.112165 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c66df55-20ac-4827-b531-7284399769c1-bundle" (OuterVolumeSpecName: "bundle") pod "8c66df55-20ac-4827-b531-7284399769c1" (UID: "8c66df55-20ac-4827-b531-7284399769c1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:03:19 crc kubenswrapper[4767]: I0127 16:03:19.116761 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c66df55-20ac-4827-b531-7284399769c1-kube-api-access-cghkt" (OuterVolumeSpecName: "kube-api-access-cghkt") pod "8c66df55-20ac-4827-b531-7284399769c1" (UID: "8c66df55-20ac-4827-b531-7284399769c1"). InnerVolumeSpecName "kube-api-access-cghkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:03:19 crc kubenswrapper[4767]: I0127 16:03:19.213692 4767 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c66df55-20ac-4827-b531-7284399769c1-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:19 crc kubenswrapper[4767]: I0127 16:03:19.213754 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cghkt\" (UniqueName: \"kubernetes.io/projected/8c66df55-20ac-4827-b531-7284399769c1-kube-api-access-cghkt\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:19 crc kubenswrapper[4767]: I0127 16:03:19.303493 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c66df55-20ac-4827-b531-7284399769c1-util" (OuterVolumeSpecName: "util") pod "8c66df55-20ac-4827-b531-7284399769c1" (UID: "8c66df55-20ac-4827-b531-7284399769c1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:03:19 crc kubenswrapper[4767]: I0127 16:03:19.315522 4767 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c66df55-20ac-4827-b531-7284399769c1-util\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:19 crc kubenswrapper[4767]: I0127 16:03:19.774305 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc" event={"ID":"8c66df55-20ac-4827-b531-7284399769c1","Type":"ContainerDied","Data":"f953c832d8f4e4aa9b874dfc26c5bf3cd52f52ea52f35db614d8da9de2f9f160"} Jan 27 16:03:19 crc kubenswrapper[4767]: I0127 16:03:19.774669 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f953c832d8f4e4aa9b874dfc26c5bf3cd52f52ea52f35db614d8da9de2f9f160" Jan 27 16:03:19 crc kubenswrapper[4767]: I0127 16:03:19.774780 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc" Jan 27 16:03:19 crc kubenswrapper[4767]: I0127 16:03:19.779678 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lm56n" event={"ID":"fd2789c8-0e03-4824-a876-61a5868c8691","Type":"ContainerStarted","Data":"be8e18e2529c7c5e4d2e896d445fd47101cd31d3026e1e974226180e0eb6477c"} Jan 27 16:03:19 crc kubenswrapper[4767]: I0127 16:03:19.803762 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lm56n" podStartSLOduration=2.111536273 podStartE2EDuration="4.803728493s" podCreationTimestamp="2026-01-27 16:03:15 +0000 UTC" firstStartedPulling="2026-01-27 16:03:16.738129547 +0000 UTC m=+819.127147070" lastFinishedPulling="2026-01-27 16:03:19.430321767 +0000 UTC m=+821.819339290" observedRunningTime="2026-01-27 16:03:19.799428849 +0000 UTC m=+822.188446392" watchObservedRunningTime="2026-01-27 16:03:19.803728493 +0000 UTC m=+822.192746016" Jan 27 16:03:24 crc kubenswrapper[4767]: I0127 16:03:24.857978 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:03:24 crc kubenswrapper[4767]: I0127 16:03:24.858265 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:03:25 crc kubenswrapper[4767]: I0127 16:03:25.553706 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lm56n" Jan 27 16:03:25 crc kubenswrapper[4767]: I0127 16:03:25.553761 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lm56n" Jan 27 16:03:26 crc kubenswrapper[4767]: I0127 16:03:26.769452 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lm56n" podUID="fd2789c8-0e03-4824-a876-61a5868c8691" containerName="registry-server" probeResult="failure" output=< Jan 27 16:03:26 crc kubenswrapper[4767]: timeout: failed to connect service ":50051" 
within 1s Jan 27 16:03:26 crc kubenswrapper[4767]: > Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.574922 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-bqzvj"] Jan 27 16:03:30 crc kubenswrapper[4767]: E0127 16:03:30.575533 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c66df55-20ac-4827-b531-7284399769c1" containerName="pull" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.575570 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c66df55-20ac-4827-b531-7284399769c1" containerName="pull" Jan 27 16:03:30 crc kubenswrapper[4767]: E0127 16:03:30.575590 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c66df55-20ac-4827-b531-7284399769c1" containerName="util" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.575597 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c66df55-20ac-4827-b531-7284399769c1" containerName="util" Jan 27 16:03:30 crc kubenswrapper[4767]: E0127 16:03:30.575608 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c66df55-20ac-4827-b531-7284399769c1" containerName="extract" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.575616 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c66df55-20ac-4827-b531-7284399769c1" containerName="extract" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.575739 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c66df55-20ac-4827-b531-7284399769c1" containerName="extract" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.576218 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-bqzvj" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.578147 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-9qmq2" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.578249 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.578974 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.595520 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-bqzvj"] Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.648459 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dmbq\" (UniqueName: \"kubernetes.io/projected/f62bd883-1c36-4ad3-973c-ab9aadf07f1d-kube-api-access-5dmbq\") pod \"obo-prometheus-operator-68bc856cb9-bqzvj\" (UID: \"f62bd883-1c36-4ad3-973c-ab9aadf07f1d\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-bqzvj" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.699782 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6f64d74f7b-bp7qg"] Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.700470 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f64d74f7b-bp7qg" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.702285 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-sxvns" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.702918 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.706784 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6f64d74f7b-2zchb"] Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.707414 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f64d74f7b-2zchb" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.714446 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6f64d74f7b-bp7qg"] Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.731167 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6f64d74f7b-2zchb"] Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.750109 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4354c097-733d-43f2-a75f-84763c81d018-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6f64d74f7b-2zchb\" (UID: \"4354c097-733d-43f2-a75f-84763c81d018\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f64d74f7b-2zchb" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.750189 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0ed4e2f4-af9f-489a-94ac-d408167207a6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6f64d74f7b-bp7qg\" (UID: \"0ed4e2f4-af9f-489a-94ac-d408167207a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f64d74f7b-bp7qg" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.750252 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0ed4e2f4-af9f-489a-94ac-d408167207a6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6f64d74f7b-bp7qg\" (UID: \"0ed4e2f4-af9f-489a-94ac-d408167207a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f64d74f7b-bp7qg" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.750308 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dmbq\" (UniqueName: \"kubernetes.io/projected/f62bd883-1c36-4ad3-973c-ab9aadf07f1d-kube-api-access-5dmbq\") pod \"obo-prometheus-operator-68bc856cb9-bqzvj\" (UID: \"f62bd883-1c36-4ad3-973c-ab9aadf07f1d\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-bqzvj" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.750347 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4354c097-733d-43f2-a75f-84763c81d018-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-6f64d74f7b-2zchb\" (UID: \"4354c097-733d-43f2-a75f-84763c81d018\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f64d74f7b-2zchb" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.775371 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dmbq\" (UniqueName: \"kubernetes.io/projected/f62bd883-1c36-4ad3-973c-ab9aadf07f1d-kube-api-access-5dmbq\") pod \"obo-prometheus-operator-68bc856cb9-bqzvj\" (UID: \"f62bd883-1c36-4ad3-973c-ab9aadf07f1d\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-bqzvj" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.851687 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4354c097-733d-43f2-a75f-84763c81d018-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6f64d74f7b-2zchb\" (UID: \"4354c097-733d-43f2-a75f-84763c81d018\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f64d74f7b-2zchb" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.851820 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4354c097-733d-43f2-a75f-84763c81d018-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6f64d74f7b-2zchb\" (UID: \"4354c097-733d-43f2-a75f-84763c81d018\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f64d74f7b-2zchb" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.852015 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0ed4e2f4-af9f-489a-94ac-d408167207a6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6f64d74f7b-bp7qg\" (UID: \"0ed4e2f4-af9f-489a-94ac-d408167207a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f64d74f7b-bp7qg" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.852428 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0ed4e2f4-af9f-489a-94ac-d408167207a6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6f64d74f7b-bp7qg\" (UID: \"0ed4e2f4-af9f-489a-94ac-d408167207a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f64d74f7b-bp7qg" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.855229 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0ed4e2f4-af9f-489a-94ac-d408167207a6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6f64d74f7b-bp7qg\" (UID: \"0ed4e2f4-af9f-489a-94ac-d408167207a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f64d74f7b-bp7qg" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.855184 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0ed4e2f4-af9f-489a-94ac-d408167207a6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6f64d74f7b-bp7qg\" (UID: \"0ed4e2f4-af9f-489a-94ac-d408167207a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f64d74f7b-bp7qg" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.855755 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/4354c097-733d-43f2-a75f-84763c81d018-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6f64d74f7b-2zchb\" (UID: \"4354c097-733d-43f2-a75f-84763c81d018\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f64d74f7b-2zchb" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.856601 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4354c097-733d-43f2-a75f-84763c81d018-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6f64d74f7b-2zchb\" (UID: \"4354c097-733d-43f2-a75f-84763c81d018\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f64d74f7b-2zchb" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.894761 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-bqzvj" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.920275 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-dwt87"] Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.920978 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-dwt87" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.922685 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-rfpxj" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.922897 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 27 16:03:30 crc kubenswrapper[4767]: I0127 16:03:30.935637 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-dwt87"] Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.017750 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f64d74f7b-bp7qg" Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.031144 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f64d74f7b-2zchb" Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.055897 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46jnl\" (UniqueName: \"kubernetes.io/projected/b10d2607-d09e-4025-92a6-9eeb1d37f536-kube-api-access-46jnl\") pod \"observability-operator-59bdc8b94-dwt87\" (UID: \"b10d2607-d09e-4025-92a6-9eeb1d37f536\") " pod="openshift-operators/observability-operator-59bdc8b94-dwt87" Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.055966 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b10d2607-d09e-4025-92a6-9eeb1d37f536-observability-operator-tls\") pod \"observability-operator-59bdc8b94-dwt87\" (UID: \"b10d2607-d09e-4025-92a6-9eeb1d37f536\") " pod="openshift-operators/observability-operator-59bdc8b94-dwt87" Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.116774 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-8vdc5"] Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.117678 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8vdc5" Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.123548 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-cjhh9" Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.149795 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-8vdc5"] Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.160302 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46jnl\" (UniqueName: \"kubernetes.io/projected/b10d2607-d09e-4025-92a6-9eeb1d37f536-kube-api-access-46jnl\") pod \"observability-operator-59bdc8b94-dwt87\" (UID: \"b10d2607-d09e-4025-92a6-9eeb1d37f536\") " pod="openshift-operators/observability-operator-59bdc8b94-dwt87" Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.160372 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b10d2607-d09e-4025-92a6-9eeb1d37f536-observability-operator-tls\") pod \"observability-operator-59bdc8b94-dwt87\" (UID: \"b10d2607-d09e-4025-92a6-9eeb1d37f536\") " pod="openshift-operators/observability-operator-59bdc8b94-dwt87" Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.203154 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b10d2607-d09e-4025-92a6-9eeb1d37f536-observability-operator-tls\") pod \"observability-operator-59bdc8b94-dwt87\" (UID: \"b10d2607-d09e-4025-92a6-9eeb1d37f536\") " pod="openshift-operators/observability-operator-59bdc8b94-dwt87" Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.208971 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46jnl\" (UniqueName: \"kubernetes.io/projected/b10d2607-d09e-4025-92a6-9eeb1d37f536-kube-api-access-46jnl\") pod \"observability-operator-59bdc8b94-dwt87\" (UID: \"b10d2607-d09e-4025-92a6-9eeb1d37f536\") " pod="openshift-operators/observability-operator-59bdc8b94-dwt87" Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.263793 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2b22\" (UniqueName: \"kubernetes.io/projected/ae225e20-7835-4f58-abe2-12416dfabe72-kube-api-access-x2b22\") pod \"perses-operator-5bf474d74f-8vdc5\" (UID: \"ae225e20-7835-4f58-abe2-12416dfabe72\") " pod="openshift-operators/perses-operator-5bf474d74f-8vdc5" Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.263873 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae225e20-7835-4f58-abe2-12416dfabe72-openshift-service-ca\") pod \"perses-operator-5bf474d74f-8vdc5\" (UID: \"ae225e20-7835-4f58-abe2-12416dfabe72\") " pod="openshift-operators/perses-operator-5bf474d74f-8vdc5" Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.297606 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-dwt87" Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.323837 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-bqzvj"] Jan 27 16:03:31 crc kubenswrapper[4767]: W0127 16:03:31.353140 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf62bd883_1c36_4ad3_973c_ab9aadf07f1d.slice/crio-a9b64f7ca6f7da25d0b818e5eb26e53d828a045b8732736177b8a2000bf8f15c WatchSource:0}: Error finding container a9b64f7ca6f7da25d0b818e5eb26e53d828a045b8732736177b8a2000bf8f15c: Status 404 returned error can't find the container with id a9b64f7ca6f7da25d0b818e5eb26e53d828a045b8732736177b8a2000bf8f15c Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.365740 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2b22\" (UniqueName: \"kubernetes.io/projected/ae225e20-7835-4f58-abe2-12416dfabe72-kube-api-access-x2b22\") pod \"perses-operator-5bf474d74f-8vdc5\" (UID: \"ae225e20-7835-4f58-abe2-12416dfabe72\") " pod="openshift-operators/perses-operator-5bf474d74f-8vdc5" Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.365781 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae225e20-7835-4f58-abe2-12416dfabe72-openshift-service-ca\") pod \"perses-operator-5bf474d74f-8vdc5\" (UID: \"ae225e20-7835-4f58-abe2-12416dfabe72\") " pod="openshift-operators/perses-operator-5bf474d74f-8vdc5" Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.366591 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae225e20-7835-4f58-abe2-12416dfabe72-openshift-service-ca\") pod \"perses-operator-5bf474d74f-8vdc5\" (UID: \"ae225e20-7835-4f58-abe2-12416dfabe72\") " pod="openshift-operators/perses-operator-5bf474d74f-8vdc5" Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.382899 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2b22\" (UniqueName: \"kubernetes.io/projected/ae225e20-7835-4f58-abe2-12416dfabe72-kube-api-access-x2b22\") pod \"perses-operator-5bf474d74f-8vdc5\" (UID: \"ae225e20-7835-4f58-abe2-12416dfabe72\") " pod="openshift-operators/perses-operator-5bf474d74f-8vdc5" Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.495045 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6f64d74f7b-bp7qg"] Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.529436 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8vdc5" Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.604924 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-dwt87"] Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.622365 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6f64d74f7b-2zchb"] Jan 27 16:03:31 crc kubenswrapper[4767]: W0127 16:03:31.625879 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4354c097_733d_43f2_a75f_84763c81d018.slice/crio-507b9e906781c625137d2504aa0fd17e7e8002581406c18f3b26dae1ecc63fae WatchSource:0}: Error finding container 507b9e906781c625137d2504aa0fd17e7e8002581406c18f3b26dae1ecc63fae: Status 404 returned error can't find the container with id 507b9e906781c625137d2504aa0fd17e7e8002581406c18f3b26dae1ecc63fae Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.775967 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-8vdc5"] Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.835912 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f64d74f7b-2zchb" event={"ID":"4354c097-733d-43f2-a75f-84763c81d018","Type":"ContainerStarted","Data":"507b9e906781c625137d2504aa0fd17e7e8002581406c18f3b26dae1ecc63fae"} Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.850354 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-8vdc5" event={"ID":"ae225e20-7835-4f58-abe2-12416dfabe72","Type":"ContainerStarted","Data":"0632bfc3706c298cfff7d7c4a6b258edc53051068ad939f7218e81d1e4dc8baf"} Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.854090 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-bqzvj" event={"ID":"f62bd883-1c36-4ad3-973c-ab9aadf07f1d","Type":"ContainerStarted","Data":"a9b64f7ca6f7da25d0b818e5eb26e53d828a045b8732736177b8a2000bf8f15c"} Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.855701 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-dwt87" event={"ID":"b10d2607-d09e-4025-92a6-9eeb1d37f536","Type":"ContainerStarted","Data":"2cce22edad033d112d8787bdfe09e0a25045d860adc3bfa1a9fce68122b5efdb"} Jan 27 16:03:31 crc kubenswrapper[4767]: I0127 16:03:31.857938 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f64d74f7b-bp7qg" event={"ID":"0ed4e2f4-af9f-489a-94ac-d408167207a6","Type":"ContainerStarted","Data":"fbe559ec5413a29141c2ff16271de9629c24d07c38c67be25a45afd49d39ff86"} Jan 27 16:03:35 crc kubenswrapper[4767]: I0127 16:03:35.601851 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lm56n" Jan 27 16:03:35 crc kubenswrapper[4767]: I0127 16:03:35.732608 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lm56n" Jan 27 16:03:35 crc kubenswrapper[4767]: I0127 16:03:35.846502 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lm56n"] Jan 27 16:03:36 crc kubenswrapper[4767]: I0127 16:03:36.886761 4767 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/redhat-operators-lm56n" podUID="fd2789c8-0e03-4824-a876-61a5868c8691" containerName="registry-server" containerID="cri-o://be8e18e2529c7c5e4d2e896d445fd47101cd31d3026e1e974226180e0eb6477c" gracePeriod=2 Jan 27 16:03:37 crc kubenswrapper[4767]: I0127 16:03:37.938624 4767 generic.go:334] "Generic (PLEG): container finished" podID="fd2789c8-0e03-4824-a876-61a5868c8691" containerID="be8e18e2529c7c5e4d2e896d445fd47101cd31d3026e1e974226180e0eb6477c" exitCode=0 Jan 27 16:03:37 crc kubenswrapper[4767]: I0127 16:03:37.938682 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lm56n" event={"ID":"fd2789c8-0e03-4824-a876-61a5868c8691","Type":"ContainerDied","Data":"be8e18e2529c7c5e4d2e896d445fd47101cd31d3026e1e974226180e0eb6477c"} Jan 27 16:03:39 crc kubenswrapper[4767]: I0127 16:03:39.325477 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lm56n" Jan 27 16:03:39 crc kubenswrapper[4767]: I0127 16:03:39.401395 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd2789c8-0e03-4824-a876-61a5868c8691-catalog-content\") pod \"fd2789c8-0e03-4824-a876-61a5868c8691\" (UID: \"fd2789c8-0e03-4824-a876-61a5868c8691\") " Jan 27 16:03:39 crc kubenswrapper[4767]: I0127 16:03:39.401446 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd2789c8-0e03-4824-a876-61a5868c8691-utilities\") pod \"fd2789c8-0e03-4824-a876-61a5868c8691\" (UID: \"fd2789c8-0e03-4824-a876-61a5868c8691\") " Jan 27 16:03:39 crc kubenswrapper[4767]: I0127 16:03:39.401476 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kv29\" (UniqueName: \"kubernetes.io/projected/fd2789c8-0e03-4824-a876-61a5868c8691-kube-api-access-6kv29\") pod \"fd2789c8-0e03-4824-a876-61a5868c8691\" (UID: \"fd2789c8-0e03-4824-a876-61a5868c8691\") " Jan 27 16:03:39 crc kubenswrapper[4767]: I0127 16:03:39.406116 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd2789c8-0e03-4824-a876-61a5868c8691-utilities" (OuterVolumeSpecName: "utilities") pod "fd2789c8-0e03-4824-a876-61a5868c8691" (UID: "fd2789c8-0e03-4824-a876-61a5868c8691"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:03:39 crc kubenswrapper[4767]: I0127 16:03:39.410576 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd2789c8-0e03-4824-a876-61a5868c8691-kube-api-access-6kv29" (OuterVolumeSpecName: "kube-api-access-6kv29") pod "fd2789c8-0e03-4824-a876-61a5868c8691" (UID: "fd2789c8-0e03-4824-a876-61a5868c8691"). InnerVolumeSpecName "kube-api-access-6kv29". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:03:39 crc kubenswrapper[4767]: I0127 16:03:39.503311 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd2789c8-0e03-4824-a876-61a5868c8691-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:39 crc kubenswrapper[4767]: I0127 16:03:39.503345 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kv29\" (UniqueName: \"kubernetes.io/projected/fd2789c8-0e03-4824-a876-61a5868c8691-kube-api-access-6kv29\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:39 crc kubenswrapper[4767]: I0127 16:03:39.542136 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd2789c8-0e03-4824-a876-61a5868c8691-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd2789c8-0e03-4824-a876-61a5868c8691" (UID: "fd2789c8-0e03-4824-a876-61a5868c8691"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:03:39 crc kubenswrapper[4767]: I0127 16:03:39.605411 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd2789c8-0e03-4824-a876-61a5868c8691-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:03:39 crc kubenswrapper[4767]: I0127 16:03:39.974792 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lm56n" event={"ID":"fd2789c8-0e03-4824-a876-61a5868c8691","Type":"ContainerDied","Data":"3d72b5f2da41baa66cfbbe869b3995375909ee5bd1ece6e1e38aba3a51b9524b"} Jan 27 16:03:39 crc kubenswrapper[4767]: I0127 16:03:39.975152 4767 scope.go:117] "RemoveContainer" containerID="be8e18e2529c7c5e4d2e896d445fd47101cd31d3026e1e974226180e0eb6477c" Jan 27 16:03:39 crc kubenswrapper[4767]: I0127 16:03:39.975319 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lm56n" Jan 27 16:03:40 crc kubenswrapper[4767]: I0127 16:03:40.005594 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lm56n"] Jan 27 16:03:40 crc kubenswrapper[4767]: I0127 16:03:40.011962 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lm56n"] Jan 27 16:03:40 crc kubenswrapper[4767]: I0127 16:03:40.183071 4767 scope.go:117] "RemoveContainer" containerID="834e727019ded8d0da74ff28802b269e79178810b29d9891cfa7341edde09b44" Jan 27 16:03:40 crc kubenswrapper[4767]: I0127 16:03:40.229454 4767 scope.go:117] "RemoveContainer" containerID="43debd9a7702548c68bc24e6832ab233123858e871b9d14dac60b7afd1a1f440" Jan 27 16:03:40 crc kubenswrapper[4767]: I0127 16:03:40.337861 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd2789c8-0e03-4824-a876-61a5868c8691" path="/var/lib/kubelet/pods/fd2789c8-0e03-4824-a876-61a5868c8691/volumes" Jan 27 16:03:40 crc kubenswrapper[4767]: I0127 16:03:40.987257 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f64d74f7b-2zchb" event={"ID":"4354c097-733d-43f2-a75f-84763c81d018","Type":"ContainerStarted","Data":"41e430e6ce91b9ea7210692e2ca267627d7f9e60febb7932313d4618e6ea46a8"} Jan 27 16:03:40 crc kubenswrapper[4767]: I0127 16:03:40.991038 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-8vdc5" event={"ID":"ae225e20-7835-4f58-abe2-12416dfabe72","Type":"ContainerStarted","Data":"5ab9bd104bcda88dbd463bcfe556e81b96ba97ee27bc16bd20ecb37754c21d05"} Jan 27 16:03:40 crc kubenswrapper[4767]: I0127 16:03:40.991167 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-8vdc5" Jan 27 16:03:40 crc kubenswrapper[4767]: I0127 16:03:40.995859 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-bqzvj" event={"ID":"f62bd883-1c36-4ad3-973c-ab9aadf07f1d","Type":"ContainerStarted","Data":"2d16ae1f87c73599060bce3925bc6754e6a3296e7d817d4eb0dfc35a128f279d"} Jan 27 16:03:40 crc kubenswrapper[4767]: I0127 16:03:40.998323 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f64d74f7b-bp7qg" event={"ID":"0ed4e2f4-af9f-489a-94ac-d408167207a6","Type":"ContainerStarted","Data":"f76faeac8c2401d02f25686129d076269711d7672eaa7e759dbdcdcd95e0aaab"} Jan 27 16:03:41 crc kubenswrapper[4767]: I0127 16:03:41.007376 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f64d74f7b-2zchb" podStartSLOduration=2.418630533 podStartE2EDuration="11.007343682s" podCreationTimestamp="2026-01-27 16:03:30 +0000 UTC" firstStartedPulling="2026-01-27 16:03:31.640658702 +0000 UTC m=+834.029676225" lastFinishedPulling="2026-01-27 16:03:40.229371851 +0000 UTC m=+842.618389374" observedRunningTime="2026-01-27 16:03:41.005614552 +0000 UTC m=+843.394632085" watchObservedRunningTime="2026-01-27 16:03:41.007343682 +0000 UTC m=+843.396361235" Jan 27 16:03:41 crc kubenswrapper[4767]: I0127 16:03:41.036541 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-bqzvj" podStartSLOduration=2.143979724 podStartE2EDuration="11.036518923s" 
podCreationTimestamp="2026-01-27 16:03:30 +0000 UTC" firstStartedPulling="2026-01-27 16:03:31.384800365 +0000 UTC m=+833.773817888" lastFinishedPulling="2026-01-27 16:03:40.277339564 +0000 UTC m=+842.666357087" observedRunningTime="2026-01-27 16:03:41.031153288 +0000 UTC m=+843.420170821" watchObservedRunningTime="2026-01-27 16:03:41.036518923 +0000 UTC m=+843.425536446" Jan 27 16:03:41 crc kubenswrapper[4767]: I0127 16:03:41.058349 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f64d74f7b-bp7qg" podStartSLOduration=2.328518984 podStartE2EDuration="11.058326331s" podCreationTimestamp="2026-01-27 16:03:30 +0000 UTC" firstStartedPulling="2026-01-27 16:03:31.506306598 +0000 UTC m=+833.895324111" lastFinishedPulling="2026-01-27 16:03:40.236113925 +0000 UTC m=+842.625131458" observedRunningTime="2026-01-27 16:03:41.053632626 +0000 UTC m=+843.442650159" watchObservedRunningTime="2026-01-27 16:03:41.058326331 +0000 UTC m=+843.447343855" Jan 27 16:03:41 crc kubenswrapper[4767]: I0127 16:03:41.128081 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-8vdc5" podStartSLOduration=1.684161568 podStartE2EDuration="10.128056332s" podCreationTimestamp="2026-01-27 16:03:31 +0000 UTC" firstStartedPulling="2026-01-27 16:03:31.792962663 +0000 UTC m=+834.181980186" lastFinishedPulling="2026-01-27 16:03:40.236857427 +0000 UTC m=+842.625874950" observedRunningTime="2026-01-27 16:03:41.120991658 +0000 UTC m=+843.510009171" watchObservedRunningTime="2026-01-27 16:03:41.128056332 +0000 UTC m=+843.517073855" Jan 27 16:03:47 crc kubenswrapper[4767]: I0127 16:03:47.041182 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-dwt87" event={"ID":"b10d2607-d09e-4025-92a6-9eeb1d37f536","Type":"ContainerStarted","Data":"f776084e550437d8ae8fec9f09676c73df33c8c0f00af781e6c5aaba9b54120d"} Jan 27 16:03:47 crc kubenswrapper[4767]: I0127 16:03:47.042092 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-dwt87" Jan 27 16:03:47 crc kubenswrapper[4767]: I0127 16:03:47.043599 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-dwt87" Jan 27 16:03:47 crc kubenswrapper[4767]: I0127 16:03:47.067142 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-dwt87" podStartSLOduration=2.563908071 podStartE2EDuration="17.067122331s" podCreationTimestamp="2026-01-27 16:03:30 +0000 UTC" firstStartedPulling="2026-01-27 16:03:31.617647609 +0000 UTC m=+834.006665132" lastFinishedPulling="2026-01-27 16:03:46.120861869 +0000 UTC m=+848.509879392" observedRunningTime="2026-01-27 16:03:47.064463625 +0000 UTC m=+849.453481168" watchObservedRunningTime="2026-01-27 16:03:47.067122331 +0000 UTC m=+849.456139864" Jan 27 16:03:51 crc kubenswrapper[4767]: I0127 16:03:51.532127 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-8vdc5" Jan 27 16:03:54 crc kubenswrapper[4767]: I0127 16:03:54.857462 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:03:54 crc kubenswrapper[4767]: I0127 16:03:54.857791 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:03:54 crc kubenswrapper[4767]: I0127 16:03:54.857836 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" Jan 27 16:03:54 crc kubenswrapper[4767]: I0127 16:03:54.858435 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fad0c9cec55858322e531728aa0e6d429308608bc45d2d2ee15b473a2ae6c66a"} pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 16:03:54 crc kubenswrapper[4767]: I0127 16:03:54.858482 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" containerID="cri-o://fad0c9cec55858322e531728aa0e6d429308608bc45d2d2ee15b473a2ae6c66a" gracePeriod=600 Jan 27 16:03:55 crc kubenswrapper[4767]: I0127 16:03:55.103812 4767 generic.go:334] "Generic (PLEG): container finished" podID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerID="fad0c9cec55858322e531728aa0e6d429308608bc45d2d2ee15b473a2ae6c66a" exitCode=0 Jan 27 16:03:55 crc kubenswrapper[4767]: I0127 16:03:55.104018 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerDied","Data":"fad0c9cec55858322e531728aa0e6d429308608bc45d2d2ee15b473a2ae6c66a"} Jan 27 16:03:55 crc kubenswrapper[4767]: I0127 16:03:55.104273 4767 scope.go:117] "RemoveContainer" containerID="353e2744423f1b1adbab04b1b018d0bf34fbc9cefa51f745c7fff9315767a5a5" Jan 27 16:03:56 crc kubenswrapper[4767]: I0127 16:03:56.110374 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerStarted","Data":"f3d25f07cf5921e6e421aefa0d813e2909e28e1abdde0dc623cba28c2a963a96"} Jan 27 16:04:09 crc kubenswrapper[4767]: I0127 16:04:09.652688 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2"] Jan 27 16:04:09 crc kubenswrapper[4767]: E0127 16:04:09.653481 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd2789c8-0e03-4824-a876-61a5868c8691" containerName="registry-server" Jan 27 16:04:09 crc kubenswrapper[4767]: I0127 16:04:09.653498 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd2789c8-0e03-4824-a876-61a5868c8691" containerName="registry-server" Jan 27 16:04:09 crc kubenswrapper[4767]: E0127 16:04:09.653521 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd2789c8-0e03-4824-a876-61a5868c8691" containerName="extract-content" Jan 27 16:04:09 crc kubenswrapper[4767]: I0127 16:04:09.653528 4767 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fd2789c8-0e03-4824-a876-61a5868c8691" containerName="extract-content" Jan 27 16:04:09 crc kubenswrapper[4767]: E0127 16:04:09.653541 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd2789c8-0e03-4824-a876-61a5868c8691" containerName="extract-utilities" Jan 27 16:04:09 crc kubenswrapper[4767]: I0127 16:04:09.653548 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd2789c8-0e03-4824-a876-61a5868c8691" containerName="extract-utilities" Jan 27 16:04:09 crc kubenswrapper[4767]: I0127 16:04:09.653664 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd2789c8-0e03-4824-a876-61a5868c8691" containerName="registry-server" Jan 27 16:04:09 crc kubenswrapper[4767]: I0127 16:04:09.654607 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2" Jan 27 16:04:09 crc kubenswrapper[4767]: I0127 16:04:09.661064 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 16:04:09 crc kubenswrapper[4767]: I0127 16:04:09.666292 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2"] Jan 27 16:04:09 crc kubenswrapper[4767]: I0127 16:04:09.803649 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w26qd\" (UniqueName: \"kubernetes.io/projected/47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36-kube-api-access-w26qd\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2\" (UID: \"47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2" Jan 27 16:04:09 crc kubenswrapper[4767]: I0127 16:04:09.803844 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2\" (UID: \"47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2" Jan 27 16:04:09 crc kubenswrapper[4767]: I0127 16:04:09.803904 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2\" (UID: \"47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2" Jan 27 16:04:09 crc kubenswrapper[4767]: I0127 16:04:09.905733 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w26qd\" (UniqueName: \"kubernetes.io/projected/47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36-kube-api-access-w26qd\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2\" (UID: \"47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2" Jan 27 16:04:09 crc kubenswrapper[4767]: I0127 16:04:09.905819 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2\" (UID: 
\"47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2" Jan 27 16:04:09 crc kubenswrapper[4767]: I0127 16:04:09.905853 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2\" (UID: \"47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2" Jan 27 16:04:09 crc kubenswrapper[4767]: I0127 16:04:09.906484 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2\" (UID: \"47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2" Jan 27 16:04:09 crc kubenswrapper[4767]: I0127 16:04:09.906604 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2\" (UID: \"47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2" Jan 27 16:04:09 crc kubenswrapper[4767]: I0127 16:04:09.929064 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w26qd\" (UniqueName: \"kubernetes.io/projected/47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36-kube-api-access-w26qd\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2\" (UID: \"47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2" Jan 27 16:04:10 crc kubenswrapper[4767]: I0127 16:04:10.005578 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2" Jan 27 16:04:10 crc kubenswrapper[4767]: I0127 16:04:10.407169 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2"] Jan 27 16:04:11 crc kubenswrapper[4767]: I0127 16:04:11.215064 4767 generic.go:334] "Generic (PLEG): container finished" podID="47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36" containerID="3d658d5909e520293dd85d1c5d3f2605db98f961494f82ce4547df10981831fd" exitCode=0 Jan 27 16:04:11 crc kubenswrapper[4767]: I0127 16:04:11.215123 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2" event={"ID":"47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36","Type":"ContainerDied","Data":"3d658d5909e520293dd85d1c5d3f2605db98f961494f82ce4547df10981831fd"} Jan 27 16:04:11 crc kubenswrapper[4767]: I0127 16:04:11.215363 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2" event={"ID":"47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36","Type":"ContainerStarted","Data":"22618fa7207adcc303030cc97d63ce125711eff3c9604fbced4383f8c098ee5c"} Jan 27 16:04:14 crc kubenswrapper[4767]: I0127 16:04:14.235732 4767 generic.go:334] "Generic (PLEG): container finished" podID="47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36" containerID="85f1829bf80bc2b3aa1823807df478dacb6dca978f5edea5758bf03090009da4" exitCode=0 Jan 27 16:04:14 crc kubenswrapper[4767]: I0127 16:04:14.235849 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2" event={"ID":"47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36","Type":"ContainerDied","Data":"85f1829bf80bc2b3aa1823807df478dacb6dca978f5edea5758bf03090009da4"} Jan 27 16:04:15 crc kubenswrapper[4767]: I0127 16:04:15.244605 4767 generic.go:334] "Generic (PLEG): container finished" podID="47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36" containerID="aa4787a46c05e46fc4cca4b8a160d7114e62c6241e8114843e382d8ff9b18954" exitCode=0 Jan 27 16:04:15 crc kubenswrapper[4767]: I0127 16:04:15.244670 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2" event={"ID":"47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36","Type":"ContainerDied","Data":"aa4787a46c05e46fc4cca4b8a160d7114e62c6241e8114843e382d8ff9b18954"} Jan 27 16:04:16 crc kubenswrapper[4767]: I0127 16:04:16.473161 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2" Jan 27 16:04:16 crc kubenswrapper[4767]: I0127 16:04:16.593024 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36-bundle\") pod \"47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36\" (UID: \"47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36\") " Jan 27 16:04:16 crc kubenswrapper[4767]: I0127 16:04:16.593084 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36-util\") pod \"47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36\" (UID: \"47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36\") " Jan 27 16:04:16 crc kubenswrapper[4767]: I0127 16:04:16.593163 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w26qd\" (UniqueName: \"kubernetes.io/projected/47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36-kube-api-access-w26qd\") pod \"47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36\" (UID: \"47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36\") " Jan 27 16:04:16 crc kubenswrapper[4767]: I0127 16:04:16.593703 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36-bundle" (OuterVolumeSpecName: "bundle") pod "47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36" (UID: "47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:04:16 crc kubenswrapper[4767]: I0127 16:04:16.599008 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36-kube-api-access-w26qd" (OuterVolumeSpecName: "kube-api-access-w26qd") pod "47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36" (UID: "47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36"). InnerVolumeSpecName "kube-api-access-w26qd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:04:16 crc kubenswrapper[4767]: I0127 16:04:16.607330 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36-util" (OuterVolumeSpecName: "util") pod "47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36" (UID: "47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:04:16 crc kubenswrapper[4767]: I0127 16:04:16.697862 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w26qd\" (UniqueName: \"kubernetes.io/projected/47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36-kube-api-access-w26qd\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:16 crc kubenswrapper[4767]: I0127 16:04:16.697950 4767 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:16 crc kubenswrapper[4767]: I0127 16:04:16.697974 4767 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36-util\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:17 crc kubenswrapper[4767]: I0127 16:04:17.259471 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2" event={"ID":"47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36","Type":"ContainerDied","Data":"22618fa7207adcc303030cc97d63ce125711eff3c9604fbced4383f8c098ee5c"} Jan 27 16:04:17 crc kubenswrapper[4767]: I0127 16:04:17.259516 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22618fa7207adcc303030cc97d63ce125711eff3c9604fbced4383f8c098ee5c" Jan 27 16:04:17 crc kubenswrapper[4767]: I0127 16:04:17.259609 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2" Jan 27 16:04:21 crc kubenswrapper[4767]: I0127 16:04:21.066011 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-pktvq"] Jan 27 16:04:21 crc kubenswrapper[4767]: E0127 16:04:21.066843 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36" containerName="pull" Jan 27 16:04:21 crc kubenswrapper[4767]: I0127 16:04:21.066856 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36" containerName="pull" Jan 27 16:04:21 crc kubenswrapper[4767]: E0127 16:04:21.066879 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36" containerName="util" Jan 27 16:04:21 crc kubenswrapper[4767]: I0127 16:04:21.066887 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36" containerName="util" Jan 27 16:04:21 crc kubenswrapper[4767]: E0127 16:04:21.066895 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36" containerName="extract" Jan 27 16:04:21 crc kubenswrapper[4767]: I0127 16:04:21.066902 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36" containerName="extract" Jan 27 16:04:21 crc kubenswrapper[4767]: I0127 16:04:21.067154 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36" containerName="extract" Jan 27 16:04:21 crc kubenswrapper[4767]: I0127 16:04:21.067716 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-pktvq" Jan 27 16:04:21 crc kubenswrapper[4767]: I0127 16:04:21.071509 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-h7xfn" Jan 27 16:04:21 crc kubenswrapper[4767]: I0127 16:04:21.071548 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 27 16:04:21 crc kubenswrapper[4767]: I0127 16:04:21.074109 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 27 16:04:21 crc kubenswrapper[4767]: I0127 16:04:21.091807 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-pktvq"] Jan 27 16:04:21 crc kubenswrapper[4767]: I0127 16:04:21.256558 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67w5r\" (UniqueName: \"kubernetes.io/projected/cf7efc5c-a9e6-4d13-aacb-e4f0d2da2abd-kube-api-access-67w5r\") pod \"nmstate-operator-646758c888-pktvq\" (UID: \"cf7efc5c-a9e6-4d13-aacb-e4f0d2da2abd\") " pod="openshift-nmstate/nmstate-operator-646758c888-pktvq" Jan 27 16:04:21 crc kubenswrapper[4767]: I0127 16:04:21.357956 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67w5r\" (UniqueName: \"kubernetes.io/projected/cf7efc5c-a9e6-4d13-aacb-e4f0d2da2abd-kube-api-access-67w5r\") pod \"nmstate-operator-646758c888-pktvq\" (UID: \"cf7efc5c-a9e6-4d13-aacb-e4f0d2da2abd\") " pod="openshift-nmstate/nmstate-operator-646758c888-pktvq" Jan 27 16:04:21 crc kubenswrapper[4767]: I0127 16:04:21.383537 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67w5r\" (UniqueName: \"kubernetes.io/projected/cf7efc5c-a9e6-4d13-aacb-e4f0d2da2abd-kube-api-access-67w5r\") pod \"nmstate-operator-646758c888-pktvq\" (UID: \"cf7efc5c-a9e6-4d13-aacb-e4f0d2da2abd\") " pod="openshift-nmstate/nmstate-operator-646758c888-pktvq" Jan 27 16:04:21 crc kubenswrapper[4767]: I0127 16:04:21.428644 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-pktvq" Jan 27 16:04:21 crc kubenswrapper[4767]: I0127 16:04:21.816700 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-pktvq"] Jan 27 16:04:22 crc kubenswrapper[4767]: I0127 16:04:22.284573 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-pktvq" event={"ID":"cf7efc5c-a9e6-4d13-aacb-e4f0d2da2abd","Type":"ContainerStarted","Data":"f944908a9003d9f811ed42d1454305310fbf4221b46d94be592052376f5eafd5"} Jan 27 16:04:25 crc kubenswrapper[4767]: I0127 16:04:25.304662 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-pktvq" event={"ID":"cf7efc5c-a9e6-4d13-aacb-e4f0d2da2abd","Type":"ContainerStarted","Data":"12c8643886fab7ba7f608b21650c15d95702541242ec76cc483169d7343a2cc5"} Jan 27 16:04:25 crc kubenswrapper[4767]: I0127 16:04:25.323070 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-pktvq" podStartSLOduration=1.971131287 podStartE2EDuration="4.323054795s" podCreationTimestamp="2026-01-27 16:04:21 +0000 UTC" firstStartedPulling="2026-01-27 16:04:21.82865751 +0000 UTC m=+884.217675033" lastFinishedPulling="2026-01-27 16:04:24.180581018 +0000 UTC m=+886.569598541" observedRunningTime="2026-01-27 16:04:25.320032228 +0000 UTC m=+887.709049771" watchObservedRunningTime="2026-01-27 16:04:25.323054795 +0000 UTC m=+887.712072318" Jan 27 16:04:30 crc kubenswrapper[4767]: I0127 16:04:30.970629 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-6hsq7"] Jan 27 16:04:30 crc kubenswrapper[4767]: I0127 16:04:30.972118 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-6hsq7" Jan 27 16:04:30 crc kubenswrapper[4767]: I0127 16:04:30.974974 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-lvmst" Jan 27 16:04:30 crc kubenswrapper[4767]: I0127 16:04:30.990980 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9j8g\" (UniqueName: \"kubernetes.io/projected/74feff31-d5c9-4aa8-8789-95a64e2811e5-kube-api-access-j9j8g\") pod \"nmstate-metrics-54757c584b-6hsq7\" (UID: \"74feff31-d5c9-4aa8-8789-95a64e2811e5\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-6hsq7" Jan 27 16:04:30 crc kubenswrapper[4767]: I0127 16:04:30.995430 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-6hsq7"] Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.001117 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-rlr2t"] Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.001879 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rlr2t" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.003841 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.025253 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-czz6l"] Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.025985 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-czz6l" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.041704 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-rlr2t"] Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.092763 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/51f8969c-3967-4f5f-b101-94e942f01395-dbus-socket\") pod \"nmstate-handler-czz6l\" (UID: \"51f8969c-3967-4f5f-b101-94e942f01395\") " pod="openshift-nmstate/nmstate-handler-czz6l" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.092841 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ec8ec347-f0ff-4091-a020-c69c4d4d9bda-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rlr2t\" (UID: \"ec8ec347-f0ff-4091-a020-c69c4d4d9bda\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rlr2t" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.092886 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfbtk\" (UniqueName: \"kubernetes.io/projected/ec8ec347-f0ff-4091-a020-c69c4d4d9bda-kube-api-access-bfbtk\") pod \"nmstate-webhook-8474b5b9d8-rlr2t\" (UID: \"ec8ec347-f0ff-4091-a020-c69c4d4d9bda\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rlr2t" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.092947 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9j8g\" (UniqueName: \"kubernetes.io/projected/74feff31-d5c9-4aa8-8789-95a64e2811e5-kube-api-access-j9j8g\") pod \"nmstate-metrics-54757c584b-6hsq7\" (UID: \"74feff31-d5c9-4aa8-8789-95a64e2811e5\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-6hsq7" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.092977 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/51f8969c-3967-4f5f-b101-94e942f01395-nmstate-lock\") pod \"nmstate-handler-czz6l\" (UID: \"51f8969c-3967-4f5f-b101-94e942f01395\") " pod="openshift-nmstate/nmstate-handler-czz6l" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.093014 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tjqs\" (UniqueName: \"kubernetes.io/projected/51f8969c-3967-4f5f-b101-94e942f01395-kube-api-access-2tjqs\") pod \"nmstate-handler-czz6l\" (UID: \"51f8969c-3967-4f5f-b101-94e942f01395\") " pod="openshift-nmstate/nmstate-handler-czz6l" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.093041 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/51f8969c-3967-4f5f-b101-94e942f01395-ovs-socket\") pod \"nmstate-handler-czz6l\" (UID: \"51f8969c-3967-4f5f-b101-94e942f01395\") " pod="openshift-nmstate/nmstate-handler-czz6l" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.115165 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-mzlgz"] Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.116032 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mzlgz" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.118542 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.118830 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-thzr8" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.118860 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.132815 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9j8g\" (UniqueName: \"kubernetes.io/projected/74feff31-d5c9-4aa8-8789-95a64e2811e5-kube-api-access-j9j8g\") pod \"nmstate-metrics-54757c584b-6hsq7\" (UID: \"74feff31-d5c9-4aa8-8789-95a64e2811e5\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-6hsq7" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.132824 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-mzlgz"] Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.193941 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfbtk\" (UniqueName: \"kubernetes.io/projected/ec8ec347-f0ff-4091-a020-c69c4d4d9bda-kube-api-access-bfbtk\") pod \"nmstate-webhook-8474b5b9d8-rlr2t\" (UID: \"ec8ec347-f0ff-4091-a020-c69c4d4d9bda\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rlr2t" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.194025 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzgvb\" (UniqueName: \"kubernetes.io/projected/2c3d4579-619c-4e0a-b802-067688bc9a2f-kube-api-access-tzgvb\") pod \"nmstate-console-plugin-7754f76f8b-mzlgz\" (UID: \"2c3d4579-619c-4e0a-b802-067688bc9a2f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mzlgz" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.194060 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/51f8969c-3967-4f5f-b101-94e942f01395-nmstate-lock\") pod \"nmstate-handler-czz6l\" (UID: \"51f8969c-3967-4f5f-b101-94e942f01395\") " pod="openshift-nmstate/nmstate-handler-czz6l" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.194094 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tjqs\" (UniqueName: \"kubernetes.io/projected/51f8969c-3967-4f5f-b101-94e942f01395-kube-api-access-2tjqs\") pod \"nmstate-handler-czz6l\" (UID: \"51f8969c-3967-4f5f-b101-94e942f01395\") " pod="openshift-nmstate/nmstate-handler-czz6l" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.194109 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/51f8969c-3967-4f5f-b101-94e942f01395-ovs-socket\") pod \"nmstate-handler-czz6l\" (UID: \"51f8969c-3967-4f5f-b101-94e942f01395\") " pod="openshift-nmstate/nmstate-handler-czz6l" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.194135 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c3d4579-619c-4e0a-b802-067688bc9a2f-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-mzlgz\" 
(UID: \"2c3d4579-619c-4e0a-b802-067688bc9a2f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mzlgz" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.194153 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/51f8969c-3967-4f5f-b101-94e942f01395-dbus-socket\") pod \"nmstate-handler-czz6l\" (UID: \"51f8969c-3967-4f5f-b101-94e942f01395\") " pod="openshift-nmstate/nmstate-handler-czz6l" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.194155 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/51f8969c-3967-4f5f-b101-94e942f01395-nmstate-lock\") pod \"nmstate-handler-czz6l\" (UID: \"51f8969c-3967-4f5f-b101-94e942f01395\") " pod="openshift-nmstate/nmstate-handler-czz6l" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.194172 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2c3d4579-619c-4e0a-b802-067688bc9a2f-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-mzlgz\" (UID: \"2c3d4579-619c-4e0a-b802-067688bc9a2f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mzlgz" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.194226 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/51f8969c-3967-4f5f-b101-94e942f01395-ovs-socket\") pod \"nmstate-handler-czz6l\" (UID: \"51f8969c-3967-4f5f-b101-94e942f01395\") " pod="openshift-nmstate/nmstate-handler-czz6l" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.194296 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ec8ec347-f0ff-4091-a020-c69c4d4d9bda-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rlr2t\" (UID: \"ec8ec347-f0ff-4091-a020-c69c4d4d9bda\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rlr2t" Jan 27 16:04:31 crc kubenswrapper[4767]: E0127 16:04:31.194422 4767 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 27 16:04:31 crc kubenswrapper[4767]: E0127 16:04:31.194475 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec8ec347-f0ff-4091-a020-c69c4d4d9bda-tls-key-pair podName:ec8ec347-f0ff-4091-a020-c69c4d4d9bda nodeName:}" failed. No retries permitted until 2026-01-27 16:04:31.694455107 +0000 UTC m=+894.083472630 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/ec8ec347-f0ff-4091-a020-c69c4d4d9bda-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-rlr2t" (UID: "ec8ec347-f0ff-4091-a020-c69c4d4d9bda") : secret "openshift-nmstate-webhook" not found Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.194810 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/51f8969c-3967-4f5f-b101-94e942f01395-dbus-socket\") pod \"nmstate-handler-czz6l\" (UID: \"51f8969c-3967-4f5f-b101-94e942f01395\") " pod="openshift-nmstate/nmstate-handler-czz6l" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.237952 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tjqs\" (UniqueName: \"kubernetes.io/projected/51f8969c-3967-4f5f-b101-94e942f01395-kube-api-access-2tjqs\") pod \"nmstate-handler-czz6l\" (UID: \"51f8969c-3967-4f5f-b101-94e942f01395\") " pod="openshift-nmstate/nmstate-handler-czz6l" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.286693 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfbtk\" (UniqueName: \"kubernetes.io/projected/ec8ec347-f0ff-4091-a020-c69c4d4d9bda-kube-api-access-bfbtk\") pod \"nmstate-webhook-8474b5b9d8-rlr2t\" (UID: \"ec8ec347-f0ff-4091-a020-c69c4d4d9bda\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rlr2t" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.290463 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-6hsq7" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.295526 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzgvb\" (UniqueName: \"kubernetes.io/projected/2c3d4579-619c-4e0a-b802-067688bc9a2f-kube-api-access-tzgvb\") pod \"nmstate-console-plugin-7754f76f8b-mzlgz\" (UID: \"2c3d4579-619c-4e0a-b802-067688bc9a2f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mzlgz" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.295598 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c3d4579-619c-4e0a-b802-067688bc9a2f-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-mzlgz\" (UID: \"2c3d4579-619c-4e0a-b802-067688bc9a2f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mzlgz" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.295626 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2c3d4579-619c-4e0a-b802-067688bc9a2f-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-mzlgz\" (UID: \"2c3d4579-619c-4e0a-b802-067688bc9a2f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mzlgz" Jan 27 16:04:31 crc kubenswrapper[4767]: E0127 16:04:31.296144 4767 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 27 16:04:31 crc kubenswrapper[4767]: E0127 16:04:31.296271 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c3d4579-619c-4e0a-b802-067688bc9a2f-plugin-serving-cert podName:2c3d4579-619c-4e0a-b802-067688bc9a2f nodeName:}" failed. No retries permitted until 2026-01-27 16:04:31.796247576 +0000 UTC m=+894.185265279 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/2c3d4579-619c-4e0a-b802-067688bc9a2f-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-mzlgz" (UID: "2c3d4579-619c-4e0a-b802-067688bc9a2f") : secret "plugin-serving-cert" not found Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.296761 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2c3d4579-619c-4e0a-b802-067688bc9a2f-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-mzlgz\" (UID: \"2c3d4579-619c-4e0a-b802-067688bc9a2f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mzlgz" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.319655 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzgvb\" (UniqueName: \"kubernetes.io/projected/2c3d4579-619c-4e0a-b802-067688bc9a2f-kube-api-access-tzgvb\") pod \"nmstate-console-plugin-7754f76f8b-mzlgz\" (UID: \"2c3d4579-619c-4e0a-b802-067688bc9a2f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mzlgz" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.343589 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-czz6l" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.456860 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5c79d47c4b-pmkv5"] Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.458019 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5c79d47c4b-pmkv5" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.472614 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5c79d47c4b-pmkv5"] Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.500166 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d631b48e-9236-4838-bd5f-618ab96d841c-console-oauth-config\") pod \"console-5c79d47c4b-pmkv5\" (UID: \"d631b48e-9236-4838-bd5f-618ab96d841c\") " pod="openshift-console/console-5c79d47c4b-pmkv5" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.500225 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d631b48e-9236-4838-bd5f-618ab96d841c-oauth-serving-cert\") pod \"console-5c79d47c4b-pmkv5\" (UID: \"d631b48e-9236-4838-bd5f-618ab96d841c\") " pod="openshift-console/console-5c79d47c4b-pmkv5" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.500346 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d631b48e-9236-4838-bd5f-618ab96d841c-console-serving-cert\") pod \"console-5c79d47c4b-pmkv5\" (UID: \"d631b48e-9236-4838-bd5f-618ab96d841c\") " pod="openshift-console/console-5c79d47c4b-pmkv5" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.500433 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6972\" (UniqueName: \"kubernetes.io/projected/d631b48e-9236-4838-bd5f-618ab96d841c-kube-api-access-b6972\") pod \"console-5c79d47c4b-pmkv5\" (UID: \"d631b48e-9236-4838-bd5f-618ab96d841c\") " pod="openshift-console/console-5c79d47c4b-pmkv5" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 
16:04:31.500507 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d631b48e-9236-4838-bd5f-618ab96d841c-console-config\") pod \"console-5c79d47c4b-pmkv5\" (UID: \"d631b48e-9236-4838-bd5f-618ab96d841c\") " pod="openshift-console/console-5c79d47c4b-pmkv5" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.500575 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d631b48e-9236-4838-bd5f-618ab96d841c-trusted-ca-bundle\") pod \"console-5c79d47c4b-pmkv5\" (UID: \"d631b48e-9236-4838-bd5f-618ab96d841c\") " pod="openshift-console/console-5c79d47c4b-pmkv5" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.500657 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d631b48e-9236-4838-bd5f-618ab96d841c-service-ca\") pod \"console-5c79d47c4b-pmkv5\" (UID: \"d631b48e-9236-4838-bd5f-618ab96d841c\") " pod="openshift-console/console-5c79d47c4b-pmkv5" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.601603 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d631b48e-9236-4838-bd5f-618ab96d841c-trusted-ca-bundle\") pod \"console-5c79d47c4b-pmkv5\" (UID: \"d631b48e-9236-4838-bd5f-618ab96d841c\") " pod="openshift-console/console-5c79d47c4b-pmkv5" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.601654 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d631b48e-9236-4838-bd5f-618ab96d841c-service-ca\") pod \"console-5c79d47c4b-pmkv5\" (UID: \"d631b48e-9236-4838-bd5f-618ab96d841c\") " pod="openshift-console/console-5c79d47c4b-pmkv5" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.601725 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d631b48e-9236-4838-bd5f-618ab96d841c-console-oauth-config\") pod \"console-5c79d47c4b-pmkv5\" (UID: \"d631b48e-9236-4838-bd5f-618ab96d841c\") " pod="openshift-console/console-5c79d47c4b-pmkv5" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.601745 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d631b48e-9236-4838-bd5f-618ab96d841c-oauth-serving-cert\") pod \"console-5c79d47c4b-pmkv5\" (UID: \"d631b48e-9236-4838-bd5f-618ab96d841c\") " pod="openshift-console/console-5c79d47c4b-pmkv5" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.601763 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d631b48e-9236-4838-bd5f-618ab96d841c-console-serving-cert\") pod \"console-5c79d47c4b-pmkv5\" (UID: \"d631b48e-9236-4838-bd5f-618ab96d841c\") " pod="openshift-console/console-5c79d47c4b-pmkv5" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.601788 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6972\" (UniqueName: \"kubernetes.io/projected/d631b48e-9236-4838-bd5f-618ab96d841c-kube-api-access-b6972\") pod \"console-5c79d47c4b-pmkv5\" (UID: \"d631b48e-9236-4838-bd5f-618ab96d841c\") " pod="openshift-console/console-5c79d47c4b-pmkv5" Jan 27 
16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.601813 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d631b48e-9236-4838-bd5f-618ab96d841c-console-config\") pod \"console-5c79d47c4b-pmkv5\" (UID: \"d631b48e-9236-4838-bd5f-618ab96d841c\") " pod="openshift-console/console-5c79d47c4b-pmkv5" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.602654 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d631b48e-9236-4838-bd5f-618ab96d841c-console-config\") pod \"console-5c79d47c4b-pmkv5\" (UID: \"d631b48e-9236-4838-bd5f-618ab96d841c\") " pod="openshift-console/console-5c79d47c4b-pmkv5" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.603370 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d631b48e-9236-4838-bd5f-618ab96d841c-trusted-ca-bundle\") pod \"console-5c79d47c4b-pmkv5\" (UID: \"d631b48e-9236-4838-bd5f-618ab96d841c\") " pod="openshift-console/console-5c79d47c4b-pmkv5" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.604272 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d631b48e-9236-4838-bd5f-618ab96d841c-oauth-serving-cert\") pod \"console-5c79d47c4b-pmkv5\" (UID: \"d631b48e-9236-4838-bd5f-618ab96d841c\") " pod="openshift-console/console-5c79d47c4b-pmkv5" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.604808 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d631b48e-9236-4838-bd5f-618ab96d841c-service-ca\") pod \"console-5c79d47c4b-pmkv5\" (UID: \"d631b48e-9236-4838-bd5f-618ab96d841c\") " pod="openshift-console/console-5c79d47c4b-pmkv5" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.609098 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d631b48e-9236-4838-bd5f-618ab96d841c-console-oauth-config\") pod \"console-5c79d47c4b-pmkv5\" (UID: \"d631b48e-9236-4838-bd5f-618ab96d841c\") " pod="openshift-console/console-5c79d47c4b-pmkv5" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.609103 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d631b48e-9236-4838-bd5f-618ab96d841c-console-serving-cert\") pod \"console-5c79d47c4b-pmkv5\" (UID: \"d631b48e-9236-4838-bd5f-618ab96d841c\") " pod="openshift-console/console-5c79d47c4b-pmkv5" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.620721 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6972\" (UniqueName: \"kubernetes.io/projected/d631b48e-9236-4838-bd5f-618ab96d841c-kube-api-access-b6972\") pod \"console-5c79d47c4b-pmkv5\" (UID: \"d631b48e-9236-4838-bd5f-618ab96d841c\") " pod="openshift-console/console-5c79d47c4b-pmkv5" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.702964 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ec8ec347-f0ff-4091-a020-c69c4d4d9bda-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rlr2t\" (UID: \"ec8ec347-f0ff-4091-a020-c69c4d4d9bda\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rlr2t" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.706716 4767 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ec8ec347-f0ff-4091-a020-c69c4d4d9bda-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rlr2t\" (UID: \"ec8ec347-f0ff-4091-a020-c69c4d4d9bda\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rlr2t" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.800842 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5c79d47c4b-pmkv5" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.804080 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c3d4579-619c-4e0a-b802-067688bc9a2f-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-mzlgz\" (UID: \"2c3d4579-619c-4e0a-b802-067688bc9a2f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mzlgz" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.810637 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c3d4579-619c-4e0a-b802-067688bc9a2f-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-mzlgz\" (UID: \"2c3d4579-619c-4e0a-b802-067688bc9a2f\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mzlgz" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.817689 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-6hsq7"] Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.922903 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rlr2t" Jan 27 16:04:31 crc kubenswrapper[4767]: I0127 16:04:31.998565 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5c79d47c4b-pmkv5"] Jan 27 16:04:32 crc kubenswrapper[4767]: W0127 16:04:32.003902 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd631b48e_9236_4838_bd5f_618ab96d841c.slice/crio-19126b86b2ed96ea5e58446e526baa9b062471441b3e2d47ea70b4ad9c131777 WatchSource:0}: Error finding container 19126b86b2ed96ea5e58446e526baa9b062471441b3e2d47ea70b4ad9c131777: Status 404 returned error can't find the container with id 19126b86b2ed96ea5e58446e526baa9b062471441b3e2d47ea70b4ad9c131777 Jan 27 16:04:32 crc kubenswrapper[4767]: I0127 16:04:32.032093 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mzlgz" Jan 27 16:04:32 crc kubenswrapper[4767]: I0127 16:04:32.122949 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-rlr2t"] Jan 27 16:04:32 crc kubenswrapper[4767]: W0127 16:04:32.137237 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec8ec347_f0ff_4091_a020_c69c4d4d9bda.slice/crio-5262df12cdcb76f9e89cb15b9e02829e46b47e4e41e8bd898bdb8b459515a325 WatchSource:0}: Error finding container 5262df12cdcb76f9e89cb15b9e02829e46b47e4e41e8bd898bdb8b459515a325: Status 404 returned error can't find the container with id 5262df12cdcb76f9e89cb15b9e02829e46b47e4e41e8bd898bdb8b459515a325 Jan 27 16:04:32 crc kubenswrapper[4767]: I0127 16:04:32.251377 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-mzlgz"] Jan 27 16:04:32 crc kubenswrapper[4767]: I0127 16:04:32.354086 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c79d47c4b-pmkv5" event={"ID":"d631b48e-9236-4838-bd5f-618ab96d841c","Type":"ContainerStarted","Data":"dd5a3d8f14b456c0a8d5d80dfda2f2d353fe87eef77e62cafd1019c64e460510"} Jan 27 16:04:32 crc kubenswrapper[4767]: I0127 16:04:32.354137 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c79d47c4b-pmkv5" event={"ID":"d631b48e-9236-4838-bd5f-618ab96d841c","Type":"ContainerStarted","Data":"19126b86b2ed96ea5e58446e526baa9b062471441b3e2d47ea70b4ad9c131777"} Jan 27 16:04:32 crc kubenswrapper[4767]: I0127 16:04:32.363939 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-6hsq7" event={"ID":"74feff31-d5c9-4aa8-8789-95a64e2811e5","Type":"ContainerStarted","Data":"7aa19ace97bc3ba4d88655ea449d8fa5c26c9dca3cec9e9e8d0e025b501b4112"} Jan 27 16:04:32 crc kubenswrapper[4767]: I0127 16:04:32.365537 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rlr2t" event={"ID":"ec8ec347-f0ff-4091-a020-c69c4d4d9bda","Type":"ContainerStarted","Data":"5262df12cdcb76f9e89cb15b9e02829e46b47e4e41e8bd898bdb8b459515a325"} Jan 27 16:04:32 crc kubenswrapper[4767]: I0127 16:04:32.371748 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-czz6l" event={"ID":"51f8969c-3967-4f5f-b101-94e942f01395","Type":"ContainerStarted","Data":"576886cb992e5ea8cf504a47cc38cf9daa35e85fe89b146321cae550ae3aab9d"} Jan 27 16:04:32 crc kubenswrapper[4767]: I0127 16:04:32.374778 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5c79d47c4b-pmkv5" podStartSLOduration=1.3747531689999999 podStartE2EDuration="1.374753169s" podCreationTimestamp="2026-01-27 16:04:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:04:32.373750361 +0000 UTC m=+894.762767894" watchObservedRunningTime="2026-01-27 16:04:32.374753169 +0000 UTC m=+894.763770692" Jan 27 16:04:32 crc kubenswrapper[4767]: I0127 16:04:32.378470 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mzlgz" event={"ID":"2c3d4579-619c-4e0a-b802-067688bc9a2f","Type":"ContainerStarted","Data":"429f331133259c568353cb2b6cbb6dce44fb0f1fa548b40c124262233cb8a018"} Jan 27 16:04:35 crc kubenswrapper[4767]: I0127 
16:04:35.402473 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rlr2t" event={"ID":"ec8ec347-f0ff-4091-a020-c69c4d4d9bda","Type":"ContainerStarted","Data":"c12c2e1e05d74fe7bca9ed5e1cc82511664c044bac6bd954559c781ad79d7d24"} Jan 27 16:04:35 crc kubenswrapper[4767]: I0127 16:04:35.403577 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rlr2t" Jan 27 16:04:35 crc kubenswrapper[4767]: I0127 16:04:35.404580 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mzlgz" event={"ID":"2c3d4579-619c-4e0a-b802-067688bc9a2f","Type":"ContainerStarted","Data":"8628160e3cde78fcd6bdf877be28e76eaa55da779a1e4166c8858b68d6b66c39"} Jan 27 16:04:35 crc kubenswrapper[4767]: I0127 16:04:35.406623 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-6hsq7" event={"ID":"74feff31-d5c9-4aa8-8789-95a64e2811e5","Type":"ContainerStarted","Data":"eab9d895ce0656366b97111334bb4828fe1ce12cfad2b93c5a07c1a2cbc51bd1"} Jan 27 16:04:35 crc kubenswrapper[4767]: I0127 16:04:35.408409 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-czz6l" event={"ID":"51f8969c-3967-4f5f-b101-94e942f01395","Type":"ContainerStarted","Data":"323ae084d3ca691c714af99279502a6cd23bf3bd074992313291c8ef21dba093"} Jan 27 16:04:35 crc kubenswrapper[4767]: I0127 16:04:35.408646 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-czz6l" Jan 27 16:04:35 crc kubenswrapper[4767]: I0127 16:04:35.423130 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rlr2t" podStartSLOduration=2.5762349970000002 podStartE2EDuration="5.423102116s" podCreationTimestamp="2026-01-27 16:04:30 +0000 UTC" firstStartedPulling="2026-01-27 16:04:32.14039369 +0000 UTC m=+894.529411213" lastFinishedPulling="2026-01-27 16:04:34.987260809 +0000 UTC m=+897.376278332" observedRunningTime="2026-01-27 16:04:35.417532516 +0000 UTC m=+897.806550069" watchObservedRunningTime="2026-01-27 16:04:35.423102116 +0000 UTC m=+897.812119639" Jan 27 16:04:35 crc kubenswrapper[4767]: I0127 16:04:35.442291 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mzlgz" podStartSLOduration=1.7238284780000002 podStartE2EDuration="4.442266755s" podCreationTimestamp="2026-01-27 16:04:31 +0000 UTC" firstStartedPulling="2026-01-27 16:04:32.265443955 +0000 UTC m=+894.654461478" lastFinishedPulling="2026-01-27 16:04:34.983882232 +0000 UTC m=+897.372899755" observedRunningTime="2026-01-27 16:04:35.440240407 +0000 UTC m=+897.829257940" watchObservedRunningTime="2026-01-27 16:04:35.442266755 +0000 UTC m=+897.831284278" Jan 27 16:04:38 crc kubenswrapper[4767]: I0127 16:04:38.360959 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-czz6l" podStartSLOduration=4.754255237 podStartE2EDuration="8.360930322s" podCreationTimestamp="2026-01-27 16:04:30 +0000 UTC" firstStartedPulling="2026-01-27 16:04:31.400362161 +0000 UTC m=+893.789379684" lastFinishedPulling="2026-01-27 16:04:35.007037246 +0000 UTC m=+897.396054769" observedRunningTime="2026-01-27 16:04:35.461062204 +0000 UTC m=+897.850079727" watchObservedRunningTime="2026-01-27 16:04:38.360930322 +0000 UTC m=+900.749947885" Jan 27 16:04:38 crc 
kubenswrapper[4767]: I0127 16:04:38.429945 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-6hsq7" event={"ID":"74feff31-d5c9-4aa8-8789-95a64e2811e5","Type":"ContainerStarted","Data":"234a2d104a69827bdf8071b28cbb331cea475217cf656c78b0b5cf12dde4916f"} Jan 27 16:04:38 crc kubenswrapper[4767]: I0127 16:04:38.450331 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-6hsq7" podStartSLOduration=2.537097265 podStartE2EDuration="8.450311385s" podCreationTimestamp="2026-01-27 16:04:30 +0000 UTC" firstStartedPulling="2026-01-27 16:04:31.82196526 +0000 UTC m=+894.210982803" lastFinishedPulling="2026-01-27 16:04:37.7351794 +0000 UTC m=+900.124196923" observedRunningTime="2026-01-27 16:04:38.449739379 +0000 UTC m=+900.838756942" watchObservedRunningTime="2026-01-27 16:04:38.450311385 +0000 UTC m=+900.839328908" Jan 27 16:04:41 crc kubenswrapper[4767]: I0127 16:04:41.379611 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-czz6l" Jan 27 16:04:41 crc kubenswrapper[4767]: I0127 16:04:41.801297 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5c79d47c4b-pmkv5" Jan 27 16:04:41 crc kubenswrapper[4767]: I0127 16:04:41.801724 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5c79d47c4b-pmkv5" Jan 27 16:04:41 crc kubenswrapper[4767]: I0127 16:04:41.808730 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5c79d47c4b-pmkv5" Jan 27 16:04:42 crc kubenswrapper[4767]: I0127 16:04:42.462726 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5c79d47c4b-pmkv5" Jan 27 16:04:42 crc kubenswrapper[4767]: I0127 16:04:42.518052 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-vxkdk"] Jan 27 16:04:51 crc kubenswrapper[4767]: I0127 16:04:51.930420 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rlr2t" Jan 27 16:05:01 crc kubenswrapper[4767]: I0127 16:05:01.028865 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wc5nx"] Jan 27 16:05:01 crc kubenswrapper[4767]: I0127 16:05:01.031415 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wc5nx" Jan 27 16:05:01 crc kubenswrapper[4767]: I0127 16:05:01.044873 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc5nx"] Jan 27 16:05:01 crc kubenswrapper[4767]: I0127 16:05:01.229360 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb6gj\" (UniqueName: \"kubernetes.io/projected/83c0ea12-1aa2-4a72-8613-81a857c72fae-kube-api-access-wb6gj\") pod \"redhat-marketplace-wc5nx\" (UID: \"83c0ea12-1aa2-4a72-8613-81a857c72fae\") " pod="openshift-marketplace/redhat-marketplace-wc5nx" Jan 27 16:05:01 crc kubenswrapper[4767]: I0127 16:05:01.229429 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83c0ea12-1aa2-4a72-8613-81a857c72fae-catalog-content\") pod \"redhat-marketplace-wc5nx\" (UID: \"83c0ea12-1aa2-4a72-8613-81a857c72fae\") " pod="openshift-marketplace/redhat-marketplace-wc5nx" Jan 27 16:05:01 crc kubenswrapper[4767]: I0127 16:05:01.229451 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83c0ea12-1aa2-4a72-8613-81a857c72fae-utilities\") pod \"redhat-marketplace-wc5nx\" (UID: \"83c0ea12-1aa2-4a72-8613-81a857c72fae\") " pod="openshift-marketplace/redhat-marketplace-wc5nx" Jan 27 16:05:01 crc kubenswrapper[4767]: I0127 16:05:01.330218 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb6gj\" (UniqueName: \"kubernetes.io/projected/83c0ea12-1aa2-4a72-8613-81a857c72fae-kube-api-access-wb6gj\") pod \"redhat-marketplace-wc5nx\" (UID: \"83c0ea12-1aa2-4a72-8613-81a857c72fae\") " pod="openshift-marketplace/redhat-marketplace-wc5nx" Jan 27 16:05:01 crc kubenswrapper[4767]: I0127 16:05:01.330312 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83c0ea12-1aa2-4a72-8613-81a857c72fae-catalog-content\") pod \"redhat-marketplace-wc5nx\" (UID: \"83c0ea12-1aa2-4a72-8613-81a857c72fae\") " pod="openshift-marketplace/redhat-marketplace-wc5nx" Jan 27 16:05:01 crc kubenswrapper[4767]: I0127 16:05:01.330343 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83c0ea12-1aa2-4a72-8613-81a857c72fae-utilities\") pod \"redhat-marketplace-wc5nx\" (UID: \"83c0ea12-1aa2-4a72-8613-81a857c72fae\") " pod="openshift-marketplace/redhat-marketplace-wc5nx" Jan 27 16:05:01 crc kubenswrapper[4767]: I0127 16:05:01.330874 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83c0ea12-1aa2-4a72-8613-81a857c72fae-catalog-content\") pod \"redhat-marketplace-wc5nx\" (UID: \"83c0ea12-1aa2-4a72-8613-81a857c72fae\") " pod="openshift-marketplace/redhat-marketplace-wc5nx" Jan 27 16:05:01 crc kubenswrapper[4767]: I0127 16:05:01.330942 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83c0ea12-1aa2-4a72-8613-81a857c72fae-utilities\") pod \"redhat-marketplace-wc5nx\" (UID: \"83c0ea12-1aa2-4a72-8613-81a857c72fae\") " pod="openshift-marketplace/redhat-marketplace-wc5nx" Jan 27 16:05:01 crc kubenswrapper[4767]: I0127 16:05:01.350363 4767 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wb6gj\" (UniqueName: \"kubernetes.io/projected/83c0ea12-1aa2-4a72-8613-81a857c72fae-kube-api-access-wb6gj\") pod \"redhat-marketplace-wc5nx\" (UID: \"83c0ea12-1aa2-4a72-8613-81a857c72fae\") " pod="openshift-marketplace/redhat-marketplace-wc5nx" Jan 27 16:05:01 crc kubenswrapper[4767]: I0127 16:05:01.646991 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wc5nx" Jan 27 16:05:02 crc kubenswrapper[4767]: I0127 16:05:02.070700 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc5nx"] Jan 27 16:05:02 crc kubenswrapper[4767]: I0127 16:05:02.584803 4767 generic.go:334] "Generic (PLEG): container finished" podID="83c0ea12-1aa2-4a72-8613-81a857c72fae" containerID="148a0fd184ba67273b3ee2f43d6a4cd0b14f2cd40e9b395f9699b7e40d5ec6b4" exitCode=0 Jan 27 16:05:02 crc kubenswrapper[4767]: I0127 16:05:02.584908 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc5nx" event={"ID":"83c0ea12-1aa2-4a72-8613-81a857c72fae","Type":"ContainerDied","Data":"148a0fd184ba67273b3ee2f43d6a4cd0b14f2cd40e9b395f9699b7e40d5ec6b4"} Jan 27 16:05:02 crc kubenswrapper[4767]: I0127 16:05:02.585298 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc5nx" event={"ID":"83c0ea12-1aa2-4a72-8613-81a857c72fae","Type":"ContainerStarted","Data":"dbda4a80f2888d06d2bf54b8668dfc0cf1a4ead74ceed494d81b2a6040a6b12e"} Jan 27 16:05:04 crc kubenswrapper[4767]: I0127 16:05:04.611331 4767 generic.go:334] "Generic (PLEG): container finished" podID="83c0ea12-1aa2-4a72-8613-81a857c72fae" containerID="96db7b0f570c4ae61e35a4f912e4808acf2ca234d262b2863ffd63cab74e6dbb" exitCode=0 Jan 27 16:05:04 crc kubenswrapper[4767]: I0127 16:05:04.611463 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc5nx" event={"ID":"83c0ea12-1aa2-4a72-8613-81a857c72fae","Type":"ContainerDied","Data":"96db7b0f570c4ae61e35a4f912e4808acf2ca234d262b2863ffd63cab74e6dbb"} Jan 27 16:05:05 crc kubenswrapper[4767]: I0127 16:05:05.638788 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n"] Jan 27 16:05:05 crc kubenswrapper[4767]: I0127 16:05:05.639952 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n" Jan 27 16:05:05 crc kubenswrapper[4767]: I0127 16:05:05.642460 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 16:05:05 crc kubenswrapper[4767]: I0127 16:05:05.654863 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n"] Jan 27 16:05:05 crc kubenswrapper[4767]: I0127 16:05:05.685646 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4b999b3a-a946-45fe-8601-ed762f22e5c1-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n\" (UID: \"4b999b3a-a946-45fe-8601-ed762f22e5c1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n" Jan 27 16:05:05 crc kubenswrapper[4767]: I0127 16:05:05.685841 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4b999b3a-a946-45fe-8601-ed762f22e5c1-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n\" (UID: \"4b999b3a-a946-45fe-8601-ed762f22e5c1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n" Jan 27 16:05:05 crc kubenswrapper[4767]: I0127 16:05:05.685862 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6tsd\" (UniqueName: \"kubernetes.io/projected/4b999b3a-a946-45fe-8601-ed762f22e5c1-kube-api-access-q6tsd\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n\" (UID: \"4b999b3a-a946-45fe-8601-ed762f22e5c1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n" Jan 27 16:05:05 crc kubenswrapper[4767]: I0127 16:05:05.788926 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4b999b3a-a946-45fe-8601-ed762f22e5c1-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n\" (UID: \"4b999b3a-a946-45fe-8601-ed762f22e5c1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n" Jan 27 16:05:05 crc kubenswrapper[4767]: I0127 16:05:05.788981 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6tsd\" (UniqueName: \"kubernetes.io/projected/4b999b3a-a946-45fe-8601-ed762f22e5c1-kube-api-access-q6tsd\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n\" (UID: \"4b999b3a-a946-45fe-8601-ed762f22e5c1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n" Jan 27 16:05:05 crc kubenswrapper[4767]: I0127 16:05:05.789029 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4b999b3a-a946-45fe-8601-ed762f22e5c1-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n\" (UID: \"4b999b3a-a946-45fe-8601-ed762f22e5c1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n" Jan 27 16:05:05 crc kubenswrapper[4767]: I0127 16:05:05.789611 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/4b999b3a-a946-45fe-8601-ed762f22e5c1-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n\" (UID: \"4b999b3a-a946-45fe-8601-ed762f22e5c1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n" Jan 27 16:05:05 crc kubenswrapper[4767]: I0127 16:05:05.789766 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4b999b3a-a946-45fe-8601-ed762f22e5c1-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n\" (UID: \"4b999b3a-a946-45fe-8601-ed762f22e5c1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n" Jan 27 16:05:05 crc kubenswrapper[4767]: I0127 16:05:05.813274 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6tsd\" (UniqueName: \"kubernetes.io/projected/4b999b3a-a946-45fe-8601-ed762f22e5c1-kube-api-access-q6tsd\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n\" (UID: \"4b999b3a-a946-45fe-8601-ed762f22e5c1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n" Jan 27 16:05:05 crc kubenswrapper[4767]: I0127 16:05:05.958687 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n" Jan 27 16:05:06 crc kubenswrapper[4767]: I0127 16:05:06.166369 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n"] Jan 27 16:05:06 crc kubenswrapper[4767]: W0127 16:05:06.182322 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b999b3a_a946_45fe_8601_ed762f22e5c1.slice/crio-76863ec462b358a4f3d4fcf0475016b8977daf0282d11d41ea178bc84a3af165 WatchSource:0}: Error finding container 76863ec462b358a4f3d4fcf0475016b8977daf0282d11d41ea178bc84a3af165: Status 404 returned error can't find the container with id 76863ec462b358a4f3d4fcf0475016b8977daf0282d11d41ea178bc84a3af165 Jan 27 16:05:06 crc kubenswrapper[4767]: I0127 16:05:06.626504 4767 generic.go:334] "Generic (PLEG): container finished" podID="4b999b3a-a946-45fe-8601-ed762f22e5c1" containerID="e5f0b1bcbc8bfc1eb58d2cbfb2d4644c475da8d07c718cbc5a69048348fde0e3" exitCode=0 Jan 27 16:05:06 crc kubenswrapper[4767]: I0127 16:05:06.626568 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n" event={"ID":"4b999b3a-a946-45fe-8601-ed762f22e5c1","Type":"ContainerDied","Data":"e5f0b1bcbc8bfc1eb58d2cbfb2d4644c475da8d07c718cbc5a69048348fde0e3"} Jan 27 16:05:06 crc kubenswrapper[4767]: I0127 16:05:06.626596 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n" event={"ID":"4b999b3a-a946-45fe-8601-ed762f22e5c1","Type":"ContainerStarted","Data":"76863ec462b358a4f3d4fcf0475016b8977daf0282d11d41ea178bc84a3af165"} Jan 27 16:05:06 crc kubenswrapper[4767]: I0127 16:05:06.629699 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc5nx" event={"ID":"83c0ea12-1aa2-4a72-8613-81a857c72fae","Type":"ContainerStarted","Data":"4b83be6057ad75111424bb22f0fc1a4db3fdee0d98dfeef092ccc974270ec228"} Jan 27 16:05:06 crc kubenswrapper[4767]: I0127 16:05:06.669363 
4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wc5nx" podStartSLOduration=2.962439966 podStartE2EDuration="6.669342464s" podCreationTimestamp="2026-01-27 16:05:00 +0000 UTC" firstStartedPulling="2026-01-27 16:05:02.586395063 +0000 UTC m=+924.975412586" lastFinishedPulling="2026-01-27 16:05:06.293297561 +0000 UTC m=+928.682315084" observedRunningTime="2026-01-27 16:05:06.666358198 +0000 UTC m=+929.055375721" watchObservedRunningTime="2026-01-27 16:05:06.669342464 +0000 UTC m=+929.058359987" Jan 27 16:05:07 crc kubenswrapper[4767]: I0127 16:05:07.569776 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-vxkdk" podUID="90596a9c-3db0-47e4-a002-a97cd73f2ab9" containerName="console" containerID="cri-o://d7e6549e037f00301940b5243a10cd71a0e3116c74030a1d4ab224e5987730c5" gracePeriod=15 Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.110464 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-vxkdk_90596a9c-3db0-47e4-a002-a97cd73f2ab9/console/0.log" Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.110698 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-vxkdk" Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.224445 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/90596a9c-3db0-47e4-a002-a97cd73f2ab9-console-oauth-config\") pod \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\" (UID: \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\") " Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.224502 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/90596a9c-3db0-47e4-a002-a97cd73f2ab9-console-serving-cert\") pod \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\" (UID: \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\") " Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.224533 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/90596a9c-3db0-47e4-a002-a97cd73f2ab9-service-ca\") pod \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\" (UID: \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\") " Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.224551 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkm4f\" (UniqueName: \"kubernetes.io/projected/90596a9c-3db0-47e4-a002-a97cd73f2ab9-kube-api-access-zkm4f\") pod \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\" (UID: \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\") " Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.224572 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/90596a9c-3db0-47e4-a002-a97cd73f2ab9-oauth-serving-cert\") pod \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\" (UID: \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\") " Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.224597 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90596a9c-3db0-47e4-a002-a97cd73f2ab9-trusted-ca-bundle\") pod \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\" (UID: \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\") " Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 
16:05:08.224639 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/90596a9c-3db0-47e4-a002-a97cd73f2ab9-console-config\") pod \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\" (UID: \"90596a9c-3db0-47e4-a002-a97cd73f2ab9\") " Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.225458 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90596a9c-3db0-47e4-a002-a97cd73f2ab9-console-config" (OuterVolumeSpecName: "console-config") pod "90596a9c-3db0-47e4-a002-a97cd73f2ab9" (UID: "90596a9c-3db0-47e4-a002-a97cd73f2ab9"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.225730 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90596a9c-3db0-47e4-a002-a97cd73f2ab9-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "90596a9c-3db0-47e4-a002-a97cd73f2ab9" (UID: "90596a9c-3db0-47e4-a002-a97cd73f2ab9"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.226045 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90596a9c-3db0-47e4-a002-a97cd73f2ab9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "90596a9c-3db0-47e4-a002-a97cd73f2ab9" (UID: "90596a9c-3db0-47e4-a002-a97cd73f2ab9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.226551 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90596a9c-3db0-47e4-a002-a97cd73f2ab9-service-ca" (OuterVolumeSpecName: "service-ca") pod "90596a9c-3db0-47e4-a002-a97cd73f2ab9" (UID: "90596a9c-3db0-47e4-a002-a97cd73f2ab9"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.230858 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90596a9c-3db0-47e4-a002-a97cd73f2ab9-kube-api-access-zkm4f" (OuterVolumeSpecName: "kube-api-access-zkm4f") pod "90596a9c-3db0-47e4-a002-a97cd73f2ab9" (UID: "90596a9c-3db0-47e4-a002-a97cd73f2ab9"). InnerVolumeSpecName "kube-api-access-zkm4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.231082 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90596a9c-3db0-47e4-a002-a97cd73f2ab9-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "90596a9c-3db0-47e4-a002-a97cd73f2ab9" (UID: "90596a9c-3db0-47e4-a002-a97cd73f2ab9"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.231782 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90596a9c-3db0-47e4-a002-a97cd73f2ab9-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "90596a9c-3db0-47e4-a002-a97cd73f2ab9" (UID: "90596a9c-3db0-47e4-a002-a97cd73f2ab9"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.325748 4767 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/90596a9c-3db0-47e4-a002-a97cd73f2ab9-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.326097 4767 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/90596a9c-3db0-47e4-a002-a97cd73f2ab9-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.326231 4767 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/90596a9c-3db0-47e4-a002-a97cd73f2ab9-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.326337 4767 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/90596a9c-3db0-47e4-a002-a97cd73f2ab9-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.326469 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkm4f\" (UniqueName: \"kubernetes.io/projected/90596a9c-3db0-47e4-a002-a97cd73f2ab9-kube-api-access-zkm4f\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.326576 4767 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/90596a9c-3db0-47e4-a002-a97cd73f2ab9-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.326664 4767 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90596a9c-3db0-47e4-a002-a97cd73f2ab9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.644446 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-vxkdk_90596a9c-3db0-47e4-a002-a97cd73f2ab9/console/0.log" Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.644514 4767 generic.go:334] "Generic (PLEG): container finished" podID="90596a9c-3db0-47e4-a002-a97cd73f2ab9" containerID="d7e6549e037f00301940b5243a10cd71a0e3116c74030a1d4ab224e5987730c5" exitCode=2 Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.644608 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vxkdk" event={"ID":"90596a9c-3db0-47e4-a002-a97cd73f2ab9","Type":"ContainerDied","Data":"d7e6549e037f00301940b5243a10cd71a0e3116c74030a1d4ab224e5987730c5"} Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.644665 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vxkdk" event={"ID":"90596a9c-3db0-47e4-a002-a97cd73f2ab9","Type":"ContainerDied","Data":"e4c52c307ead7c3c48ef164c785632647295f714f3938cbbaaa8e2d05a805056"} Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.644620 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-vxkdk" Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.644713 4767 scope.go:117] "RemoveContainer" containerID="d7e6549e037f00301940b5243a10cd71a0e3116c74030a1d4ab224e5987730c5" Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.646865 4767 generic.go:334] "Generic (PLEG): container finished" podID="4b999b3a-a946-45fe-8601-ed762f22e5c1" containerID="16d60ccf8e6da8b0aad34e3730e7d1c79357d93b7061dec3eed7f42e92815b52" exitCode=0 Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.646903 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n" event={"ID":"4b999b3a-a946-45fe-8601-ed762f22e5c1","Type":"ContainerDied","Data":"16d60ccf8e6da8b0aad34e3730e7d1c79357d93b7061dec3eed7f42e92815b52"} Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.668838 4767 scope.go:117] "RemoveContainer" containerID="d7e6549e037f00301940b5243a10cd71a0e3116c74030a1d4ab224e5987730c5" Jan 27 16:05:08 crc kubenswrapper[4767]: E0127 16:05:08.669371 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7e6549e037f00301940b5243a10cd71a0e3116c74030a1d4ab224e5987730c5\": container with ID starting with d7e6549e037f00301940b5243a10cd71a0e3116c74030a1d4ab224e5987730c5 not found: ID does not exist" containerID="d7e6549e037f00301940b5243a10cd71a0e3116c74030a1d4ab224e5987730c5" Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.669398 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7e6549e037f00301940b5243a10cd71a0e3116c74030a1d4ab224e5987730c5"} err="failed to get container status \"d7e6549e037f00301940b5243a10cd71a0e3116c74030a1d4ab224e5987730c5\": rpc error: code = NotFound desc = could not find container \"d7e6549e037f00301940b5243a10cd71a0e3116c74030a1d4ab224e5987730c5\": container with ID starting with d7e6549e037f00301940b5243a10cd71a0e3116c74030a1d4ab224e5987730c5 not found: ID does not exist" Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.693943 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-vxkdk"] Jan 27 16:05:08 crc kubenswrapper[4767]: I0127 16:05:08.699812 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-vxkdk"] Jan 27 16:05:09 crc kubenswrapper[4767]: I0127 16:05:09.659391 4767 generic.go:334] "Generic (PLEG): container finished" podID="4b999b3a-a946-45fe-8601-ed762f22e5c1" containerID="0a915dfb353d0370f722cf3b99fe1197e42583b9e9ae2d1bdfafd7696bc75432" exitCode=0 Jan 27 16:05:09 crc kubenswrapper[4767]: I0127 16:05:09.659497 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n" event={"ID":"4b999b3a-a946-45fe-8601-ed762f22e5c1","Type":"ContainerDied","Data":"0a915dfb353d0370f722cf3b99fe1197e42583b9e9ae2d1bdfafd7696bc75432"} Jan 27 16:05:10 crc kubenswrapper[4767]: I0127 16:05:10.339230 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90596a9c-3db0-47e4-a002-a97cd73f2ab9" path="/var/lib/kubelet/pods/90596a9c-3db0-47e4-a002-a97cd73f2ab9/volumes" Jan 27 16:05:10 crc kubenswrapper[4767]: I0127 16:05:10.919466 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n" Jan 27 16:05:11 crc kubenswrapper[4767]: I0127 16:05:11.081708 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6tsd\" (UniqueName: \"kubernetes.io/projected/4b999b3a-a946-45fe-8601-ed762f22e5c1-kube-api-access-q6tsd\") pod \"4b999b3a-a946-45fe-8601-ed762f22e5c1\" (UID: \"4b999b3a-a946-45fe-8601-ed762f22e5c1\") " Jan 27 16:05:11 crc kubenswrapper[4767]: I0127 16:05:11.081758 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4b999b3a-a946-45fe-8601-ed762f22e5c1-util\") pod \"4b999b3a-a946-45fe-8601-ed762f22e5c1\" (UID: \"4b999b3a-a946-45fe-8601-ed762f22e5c1\") " Jan 27 16:05:11 crc kubenswrapper[4767]: I0127 16:05:11.081829 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4b999b3a-a946-45fe-8601-ed762f22e5c1-bundle\") pod \"4b999b3a-a946-45fe-8601-ed762f22e5c1\" (UID: \"4b999b3a-a946-45fe-8601-ed762f22e5c1\") " Jan 27 16:05:11 crc kubenswrapper[4767]: I0127 16:05:11.082830 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b999b3a-a946-45fe-8601-ed762f22e5c1-bundle" (OuterVolumeSpecName: "bundle") pod "4b999b3a-a946-45fe-8601-ed762f22e5c1" (UID: "4b999b3a-a946-45fe-8601-ed762f22e5c1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:05:11 crc kubenswrapper[4767]: I0127 16:05:11.088096 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b999b3a-a946-45fe-8601-ed762f22e5c1-kube-api-access-q6tsd" (OuterVolumeSpecName: "kube-api-access-q6tsd") pod "4b999b3a-a946-45fe-8601-ed762f22e5c1" (UID: "4b999b3a-a946-45fe-8601-ed762f22e5c1"). InnerVolumeSpecName "kube-api-access-q6tsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:05:11 crc kubenswrapper[4767]: I0127 16:05:11.096023 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b999b3a-a946-45fe-8601-ed762f22e5c1-util" (OuterVolumeSpecName: "util") pod "4b999b3a-a946-45fe-8601-ed762f22e5c1" (UID: "4b999b3a-a946-45fe-8601-ed762f22e5c1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:05:11 crc kubenswrapper[4767]: I0127 16:05:11.183342 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6tsd\" (UniqueName: \"kubernetes.io/projected/4b999b3a-a946-45fe-8601-ed762f22e5c1-kube-api-access-q6tsd\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:11 crc kubenswrapper[4767]: I0127 16:05:11.183373 4767 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4b999b3a-a946-45fe-8601-ed762f22e5c1-util\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:11 crc kubenswrapper[4767]: I0127 16:05:11.183382 4767 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4b999b3a-a946-45fe-8601-ed762f22e5c1-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:11 crc kubenswrapper[4767]: I0127 16:05:11.647160 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wc5nx" Jan 27 16:05:11 crc kubenswrapper[4767]: I0127 16:05:11.647666 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wc5nx" Jan 27 16:05:11 crc kubenswrapper[4767]: I0127 16:05:11.672957 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n" event={"ID":"4b999b3a-a946-45fe-8601-ed762f22e5c1","Type":"ContainerDied","Data":"76863ec462b358a4f3d4fcf0475016b8977daf0282d11d41ea178bc84a3af165"} Jan 27 16:05:11 crc kubenswrapper[4767]: I0127 16:05:11.673003 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76863ec462b358a4f3d4fcf0475016b8977daf0282d11d41ea178bc84a3af165" Jan 27 16:05:11 crc kubenswrapper[4767]: I0127 16:05:11.673018 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n" Jan 27 16:05:11 crc kubenswrapper[4767]: I0127 16:05:11.694616 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wc5nx" Jan 27 16:05:11 crc kubenswrapper[4767]: I0127 16:05:11.734951 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wc5nx" Jan 27 16:05:13 crc kubenswrapper[4767]: I0127 16:05:13.994815 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc5nx"] Jan 27 16:05:13 crc kubenswrapper[4767]: I0127 16:05:13.996158 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wc5nx" podUID="83c0ea12-1aa2-4a72-8613-81a857c72fae" containerName="registry-server" containerID="cri-o://4b83be6057ad75111424bb22f0fc1a4db3fdee0d98dfeef092ccc974270ec228" gracePeriod=2 Jan 27 16:05:14 crc kubenswrapper[4767]: I0127 16:05:14.689723 4767 generic.go:334] "Generic (PLEG): container finished" podID="83c0ea12-1aa2-4a72-8613-81a857c72fae" containerID="4b83be6057ad75111424bb22f0fc1a4db3fdee0d98dfeef092ccc974270ec228" exitCode=0 Jan 27 16:05:14 crc kubenswrapper[4767]: I0127 16:05:14.689803 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc5nx" event={"ID":"83c0ea12-1aa2-4a72-8613-81a857c72fae","Type":"ContainerDied","Data":"4b83be6057ad75111424bb22f0fc1a4db3fdee0d98dfeef092ccc974270ec228"} Jan 27 16:05:15 crc kubenswrapper[4767]: I0127 16:05:15.366808 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wc5nx" Jan 27 16:05:15 crc kubenswrapper[4767]: I0127 16:05:15.540384 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83c0ea12-1aa2-4a72-8613-81a857c72fae-utilities\") pod \"83c0ea12-1aa2-4a72-8613-81a857c72fae\" (UID: \"83c0ea12-1aa2-4a72-8613-81a857c72fae\") " Jan 27 16:05:15 crc kubenswrapper[4767]: I0127 16:05:15.540442 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83c0ea12-1aa2-4a72-8613-81a857c72fae-catalog-content\") pod \"83c0ea12-1aa2-4a72-8613-81a857c72fae\" (UID: \"83c0ea12-1aa2-4a72-8613-81a857c72fae\") " Jan 27 16:05:15 crc kubenswrapper[4767]: I0127 16:05:15.540510 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb6gj\" (UniqueName: \"kubernetes.io/projected/83c0ea12-1aa2-4a72-8613-81a857c72fae-kube-api-access-wb6gj\") pod \"83c0ea12-1aa2-4a72-8613-81a857c72fae\" (UID: \"83c0ea12-1aa2-4a72-8613-81a857c72fae\") " Jan 27 16:05:15 crc kubenswrapper[4767]: I0127 16:05:15.552695 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83c0ea12-1aa2-4a72-8613-81a857c72fae-utilities" (OuterVolumeSpecName: "utilities") pod "83c0ea12-1aa2-4a72-8613-81a857c72fae" (UID: "83c0ea12-1aa2-4a72-8613-81a857c72fae"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:05:15 crc kubenswrapper[4767]: I0127 16:05:15.566432 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83c0ea12-1aa2-4a72-8613-81a857c72fae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83c0ea12-1aa2-4a72-8613-81a857c72fae" (UID: "83c0ea12-1aa2-4a72-8613-81a857c72fae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:05:15 crc kubenswrapper[4767]: I0127 16:05:15.587477 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83c0ea12-1aa2-4a72-8613-81a857c72fae-kube-api-access-wb6gj" (OuterVolumeSpecName: "kube-api-access-wb6gj") pod "83c0ea12-1aa2-4a72-8613-81a857c72fae" (UID: "83c0ea12-1aa2-4a72-8613-81a857c72fae"). InnerVolumeSpecName "kube-api-access-wb6gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:05:15 crc kubenswrapper[4767]: I0127 16:05:15.641707 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83c0ea12-1aa2-4a72-8613-81a857c72fae-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:15 crc kubenswrapper[4767]: I0127 16:05:15.641763 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb6gj\" (UniqueName: \"kubernetes.io/projected/83c0ea12-1aa2-4a72-8613-81a857c72fae-kube-api-access-wb6gj\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:15 crc kubenswrapper[4767]: I0127 16:05:15.641779 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83c0ea12-1aa2-4a72-8613-81a857c72fae-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:05:15 crc kubenswrapper[4767]: I0127 16:05:15.697792 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc5nx" event={"ID":"83c0ea12-1aa2-4a72-8613-81a857c72fae","Type":"ContainerDied","Data":"dbda4a80f2888d06d2bf54b8668dfc0cf1a4ead74ceed494d81b2a6040a6b12e"} Jan 27 16:05:15 crc kubenswrapper[4767]: I0127 16:05:15.697835 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wc5nx" Jan 27 16:05:15 crc kubenswrapper[4767]: I0127 16:05:15.697870 4767 scope.go:117] "RemoveContainer" containerID="4b83be6057ad75111424bb22f0fc1a4db3fdee0d98dfeef092ccc974270ec228" Jan 27 16:05:15 crc kubenswrapper[4767]: I0127 16:05:15.742984 4767 scope.go:117] "RemoveContainer" containerID="96db7b0f570c4ae61e35a4f912e4808acf2ca234d262b2863ffd63cab74e6dbb" Jan 27 16:05:15 crc kubenswrapper[4767]: I0127 16:05:15.750150 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc5nx"] Jan 27 16:05:15 crc kubenswrapper[4767]: I0127 16:05:15.756918 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc5nx"] Jan 27 16:05:15 crc kubenswrapper[4767]: I0127 16:05:15.766947 4767 scope.go:117] "RemoveContainer" containerID="148a0fd184ba67273b3ee2f43d6a4cd0b14f2cd40e9b395f9699b7e40d5ec6b4" Jan 27 16:05:16 crc kubenswrapper[4767]: I0127 16:05:16.331424 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83c0ea12-1aa2-4a72-8613-81a857c72fae" path="/var/lib/kubelet/pods/83c0ea12-1aa2-4a72-8613-81a857c72fae/volumes" Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.111935 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-844f686c44-k5sth"] Jan 27 16:05:22 crc kubenswrapper[4767]: E0127 16:05:22.112717 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90596a9c-3db0-47e4-a002-a97cd73f2ab9" containerName="console" Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.112729 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="90596a9c-3db0-47e4-a002-a97cd73f2ab9" containerName="console" Jan 27 16:05:22 crc kubenswrapper[4767]: E0127 16:05:22.112739 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c0ea12-1aa2-4a72-8613-81a857c72fae" containerName="extract-content" Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.112747 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c0ea12-1aa2-4a72-8613-81a857c72fae" containerName="extract-content" Jan 27 16:05:22 crc kubenswrapper[4767]: E0127 16:05:22.112757 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b999b3a-a946-45fe-8601-ed762f22e5c1" containerName="pull" Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.112764 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b999b3a-a946-45fe-8601-ed762f22e5c1" containerName="pull" Jan 27 16:05:22 crc kubenswrapper[4767]: E0127 16:05:22.112772 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b999b3a-a946-45fe-8601-ed762f22e5c1" containerName="util" Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.112778 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b999b3a-a946-45fe-8601-ed762f22e5c1" containerName="util" Jan 27 16:05:22 crc kubenswrapper[4767]: E0127 16:05:22.112786 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c0ea12-1aa2-4a72-8613-81a857c72fae" containerName="registry-server" Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.112791 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c0ea12-1aa2-4a72-8613-81a857c72fae" containerName="registry-server" Jan 27 16:05:22 crc kubenswrapper[4767]: E0127 16:05:22.112804 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b999b3a-a946-45fe-8601-ed762f22e5c1" containerName="extract" Jan 27 16:05:22 crc 
kubenswrapper[4767]: I0127 16:05:22.112810 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b999b3a-a946-45fe-8601-ed762f22e5c1" containerName="extract"
Jan 27 16:05:22 crc kubenswrapper[4767]: E0127 16:05:22.112818 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c0ea12-1aa2-4a72-8613-81a857c72fae" containerName="extract-utilities"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.112824 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c0ea12-1aa2-4a72-8613-81a857c72fae" containerName="extract-utilities"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.112908 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="83c0ea12-1aa2-4a72-8613-81a857c72fae" containerName="registry-server"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.112926 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b999b3a-a946-45fe-8601-ed762f22e5c1" containerName="extract"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.112934 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="90596a9c-3db0-47e4-a002-a97cd73f2ab9" containerName="console"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.113336 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-844f686c44-k5sth"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.120598 4767 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.120688 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.121218 4767 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.121270 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.121440 4767 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-7h9cs"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.137572 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-844f686c44-k5sth"]
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.223439 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9fe0bf56-5fc0-4fbf-a0e5-a372cb365905-webhook-cert\") pod \"metallb-operator-controller-manager-844f686c44-k5sth\" (UID: \"9fe0bf56-5fc0-4fbf-a0e5-a372cb365905\") " pod="metallb-system/metallb-operator-controller-manager-844f686c44-k5sth"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.223500 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9fe0bf56-5fc0-4fbf-a0e5-a372cb365905-apiservice-cert\") pod \"metallb-operator-controller-manager-844f686c44-k5sth\" (UID: \"9fe0bf56-5fc0-4fbf-a0e5-a372cb365905\") " pod="metallb-system/metallb-operator-controller-manager-844f686c44-k5sth"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.223535 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s99jj\" (UniqueName: \"kubernetes.io/projected/9fe0bf56-5fc0-4fbf-a0e5-a372cb365905-kube-api-access-s99jj\") pod \"metallb-operator-controller-manager-844f686c44-k5sth\" (UID: \"9fe0bf56-5fc0-4fbf-a0e5-a372cb365905\") " pod="metallb-system/metallb-operator-controller-manager-844f686c44-k5sth"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.325190 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9fe0bf56-5fc0-4fbf-a0e5-a372cb365905-webhook-cert\") pod \"metallb-operator-controller-manager-844f686c44-k5sth\" (UID: \"9fe0bf56-5fc0-4fbf-a0e5-a372cb365905\") " pod="metallb-system/metallb-operator-controller-manager-844f686c44-k5sth"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.325264 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9fe0bf56-5fc0-4fbf-a0e5-a372cb365905-apiservice-cert\") pod \"metallb-operator-controller-manager-844f686c44-k5sth\" (UID: \"9fe0bf56-5fc0-4fbf-a0e5-a372cb365905\") " pod="metallb-system/metallb-operator-controller-manager-844f686c44-k5sth"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.325301 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s99jj\" (UniqueName: \"kubernetes.io/projected/9fe0bf56-5fc0-4fbf-a0e5-a372cb365905-kube-api-access-s99jj\") pod \"metallb-operator-controller-manager-844f686c44-k5sth\" (UID: \"9fe0bf56-5fc0-4fbf-a0e5-a372cb365905\") " pod="metallb-system/metallb-operator-controller-manager-844f686c44-k5sth"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.333775 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9fe0bf56-5fc0-4fbf-a0e5-a372cb365905-webhook-cert\") pod \"metallb-operator-controller-manager-844f686c44-k5sth\" (UID: \"9fe0bf56-5fc0-4fbf-a0e5-a372cb365905\") " pod="metallb-system/metallb-operator-controller-manager-844f686c44-k5sth"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.334011 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9fe0bf56-5fc0-4fbf-a0e5-a372cb365905-apiservice-cert\") pod \"metallb-operator-controller-manager-844f686c44-k5sth\" (UID: \"9fe0bf56-5fc0-4fbf-a0e5-a372cb365905\") " pod="metallb-system/metallb-operator-controller-manager-844f686c44-k5sth"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.349782 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s99jj\" (UniqueName: \"kubernetes.io/projected/9fe0bf56-5fc0-4fbf-a0e5-a372cb365905-kube-api-access-s99jj\") pod \"metallb-operator-controller-manager-844f686c44-k5sth\" (UID: \"9fe0bf56-5fc0-4fbf-a0e5-a372cb365905\") " pod="metallb-system/metallb-operator-controller-manager-844f686c44-k5sth"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.371096 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-596bfd7f57-hl2fj"]
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.372132 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-596bfd7f57-hl2fj"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.377008 4767 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.377844 4767 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-v85ks"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.378546 4767 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.396453 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-596bfd7f57-hl2fj"]
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.429536 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-844f686c44-k5sth"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.527280 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a13e440-ff73-4cd7-9759-6ec6c9f7779c-webhook-cert\") pod \"metallb-operator-webhook-server-596bfd7f57-hl2fj\" (UID: \"0a13e440-ff73-4cd7-9759-6ec6c9f7779c\") " pod="metallb-system/metallb-operator-webhook-server-596bfd7f57-hl2fj"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.527394 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2mks\" (UniqueName: \"kubernetes.io/projected/0a13e440-ff73-4cd7-9759-6ec6c9f7779c-kube-api-access-q2mks\") pod \"metallb-operator-webhook-server-596bfd7f57-hl2fj\" (UID: \"0a13e440-ff73-4cd7-9759-6ec6c9f7779c\") " pod="metallb-system/metallb-operator-webhook-server-596bfd7f57-hl2fj"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.527425 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0a13e440-ff73-4cd7-9759-6ec6c9f7779c-apiservice-cert\") pod \"metallb-operator-webhook-server-596bfd7f57-hl2fj\" (UID: \"0a13e440-ff73-4cd7-9759-6ec6c9f7779c\") " pod="metallb-system/metallb-operator-webhook-server-596bfd7f57-hl2fj"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.634819 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a13e440-ff73-4cd7-9759-6ec6c9f7779c-webhook-cert\") pod \"metallb-operator-webhook-server-596bfd7f57-hl2fj\" (UID: \"0a13e440-ff73-4cd7-9759-6ec6c9f7779c\") " pod="metallb-system/metallb-operator-webhook-server-596bfd7f57-hl2fj"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.634891 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2mks\" (UniqueName: \"kubernetes.io/projected/0a13e440-ff73-4cd7-9759-6ec6c9f7779c-kube-api-access-q2mks\") pod \"metallb-operator-webhook-server-596bfd7f57-hl2fj\" (UID: \"0a13e440-ff73-4cd7-9759-6ec6c9f7779c\") " pod="metallb-system/metallb-operator-webhook-server-596bfd7f57-hl2fj"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.634914 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0a13e440-ff73-4cd7-9759-6ec6c9f7779c-apiservice-cert\") pod \"metallb-operator-webhook-server-596bfd7f57-hl2fj\" (UID: \"0a13e440-ff73-4cd7-9759-6ec6c9f7779c\") " pod="metallb-system/metallb-operator-webhook-server-596bfd7f57-hl2fj"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.664215 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2mks\" (UniqueName: \"kubernetes.io/projected/0a13e440-ff73-4cd7-9759-6ec6c9f7779c-kube-api-access-q2mks\") pod \"metallb-operator-webhook-server-596bfd7f57-hl2fj\" (UID: \"0a13e440-ff73-4cd7-9759-6ec6c9f7779c\") " pod="metallb-system/metallb-operator-webhook-server-596bfd7f57-hl2fj"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.665235 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a13e440-ff73-4cd7-9759-6ec6c9f7779c-webhook-cert\") pod \"metallb-operator-webhook-server-596bfd7f57-hl2fj\" (UID: \"0a13e440-ff73-4cd7-9759-6ec6c9f7779c\") " pod="metallb-system/metallb-operator-webhook-server-596bfd7f57-hl2fj"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.665260 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0a13e440-ff73-4cd7-9759-6ec6c9f7779c-apiservice-cert\") pod \"metallb-operator-webhook-server-596bfd7f57-hl2fj\" (UID: \"0a13e440-ff73-4cd7-9759-6ec6c9f7779c\") " pod="metallb-system/metallb-operator-webhook-server-596bfd7f57-hl2fj"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.688316 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-596bfd7f57-hl2fj"
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.784295 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-844f686c44-k5sth"]
Jan 27 16:05:22 crc kubenswrapper[4767]: I0127 16:05:22.944018 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-596bfd7f57-hl2fj"]
Jan 27 16:05:22 crc kubenswrapper[4767]: W0127 16:05:22.950356 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a13e440_ff73_4cd7_9759_6ec6c9f7779c.slice/crio-981556551c502858ddeba238a4554cd73229dac65f507e6d1e35d2a55c849f34 WatchSource:0}: Error finding container 981556551c502858ddeba238a4554cd73229dac65f507e6d1e35d2a55c849f34: Status 404 returned error can't find the container with id 981556551c502858ddeba238a4554cd73229dac65f507e6d1e35d2a55c849f34
Jan 27 16:05:23 crc kubenswrapper[4767]: I0127 16:05:23.743460 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-596bfd7f57-hl2fj" event={"ID":"0a13e440-ff73-4cd7-9759-6ec6c9f7779c","Type":"ContainerStarted","Data":"981556551c502858ddeba238a4554cd73229dac65f507e6d1e35d2a55c849f34"}
Jan 27 16:05:23 crc kubenswrapper[4767]: I0127 16:05:23.745078 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-844f686c44-k5sth" event={"ID":"9fe0bf56-5fc0-4fbf-a0e5-a372cb365905","Type":"ContainerStarted","Data":"ee7704b2b314eef5abc5c20e983c1715de0ee69c36c2c49d4754f3e1cebe44c8"}
Jan 27 16:05:28 crc kubenswrapper[4767]: I0127 16:05:28.778844 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-596bfd7f57-hl2fj" event={"ID":"0a13e440-ff73-4cd7-9759-6ec6c9f7779c","Type":"ContainerStarted","Data":"9add84ba050881ddde76bf5830cbe09ede66f95a9d4b37bcd80e3ff8ce55365e"}
Jan 27 16:05:28 crc kubenswrapper[4767]: I0127 16:05:28.779403 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-596bfd7f57-hl2fj"
Jan 27 16:05:28 crc kubenswrapper[4767]: I0127 16:05:28.800744 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-596bfd7f57-hl2fj" podStartSLOduration=1.853381653 podStartE2EDuration="6.800725669s" podCreationTimestamp="2026-01-27 16:05:22 +0000 UTC" firstStartedPulling="2026-01-27 16:05:22.952929644 +0000 UTC m=+945.341947167" lastFinishedPulling="2026-01-27 16:05:27.90027366 +0000 UTC m=+950.289291183" observedRunningTime="2026-01-27 16:05:28.79657533 +0000 UTC m=+951.185592863" watchObservedRunningTime="2026-01-27 16:05:28.800725669 +0000 UTC m=+951.189743182"
Jan 27 16:05:35 crc kubenswrapper[4767]: I0127 16:05:35.817514 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-844f686c44-k5sth" event={"ID":"9fe0bf56-5fc0-4fbf-a0e5-a372cb365905","Type":"ContainerStarted","Data":"1e280c0c772a7af1c0e189331e1ba232b9389816813758b28de6451a05e38d12"}
Jan 27 16:05:35 crc kubenswrapper[4767]: I0127 16:05:35.818102 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-844f686c44-k5sth"
Jan 27 16:05:35 crc kubenswrapper[4767]: I0127 16:05:35.845131 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-844f686c44-k5sth" podStartSLOduration=1.2919807159999999 podStartE2EDuration="13.845110113s" podCreationTimestamp="2026-01-27 16:05:22 +0000 UTC" firstStartedPulling="2026-01-27 16:05:22.811046496 +0000 UTC m=+945.200064019" lastFinishedPulling="2026-01-27 16:05:35.364175893 +0000 UTC m=+957.753193416" observedRunningTime="2026-01-27 16:05:35.837302299 +0000 UTC m=+958.226319862" watchObservedRunningTime="2026-01-27 16:05:35.845110113 +0000 UTC m=+958.234127656"
Jan 27 16:05:36 crc kubenswrapper[4767]: I0127 16:05:36.052838 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vp72t"]
Jan 27 16:05:36 crc kubenswrapper[4767]: I0127 16:05:36.054386 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vp72t"
Jan 27 16:05:36 crc kubenswrapper[4767]: I0127 16:05:36.083545 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vp72t"]
Jan 27 16:05:36 crc kubenswrapper[4767]: I0127 16:05:36.131812 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dabee4e2-e6b6-4a25-8ab9-bee476131561-utilities\") pod \"community-operators-vp72t\" (UID: \"dabee4e2-e6b6-4a25-8ab9-bee476131561\") " pod="openshift-marketplace/community-operators-vp72t"
Jan 27 16:05:36 crc kubenswrapper[4767]: I0127 16:05:36.131934 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dabee4e2-e6b6-4a25-8ab9-bee476131561-catalog-content\") pod \"community-operators-vp72t\" (UID: \"dabee4e2-e6b6-4a25-8ab9-bee476131561\") " pod="openshift-marketplace/community-operators-vp72t"
Jan 27 16:05:36 crc kubenswrapper[4767]: I0127 16:05:36.131972 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdcr2\" (UniqueName: \"kubernetes.io/projected/dabee4e2-e6b6-4a25-8ab9-bee476131561-kube-api-access-cdcr2\") pod \"community-operators-vp72t\" (UID: \"dabee4e2-e6b6-4a25-8ab9-bee476131561\") " pod="openshift-marketplace/community-operators-vp72t"
Jan 27 16:05:36 crc kubenswrapper[4767]: I0127 16:05:36.234175 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dabee4e2-e6b6-4a25-8ab9-bee476131561-utilities\") pod \"community-operators-vp72t\" (UID: \"dabee4e2-e6b6-4a25-8ab9-bee476131561\") " pod="openshift-marketplace/community-operators-vp72t"
Jan 27 16:05:36 crc kubenswrapper[4767]: I0127 16:05:36.234322 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dabee4e2-e6b6-4a25-8ab9-bee476131561-catalog-content\") pod \"community-operators-vp72t\" (UID: \"dabee4e2-e6b6-4a25-8ab9-bee476131561\") " pod="openshift-marketplace/community-operators-vp72t"
Jan 27 16:05:36 crc kubenswrapper[4767]: I0127 16:05:36.234364 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdcr2\" (UniqueName: \"kubernetes.io/projected/dabee4e2-e6b6-4a25-8ab9-bee476131561-kube-api-access-cdcr2\") pod \"community-operators-vp72t\" (UID: \"dabee4e2-e6b6-4a25-8ab9-bee476131561\") " pod="openshift-marketplace/community-operators-vp72t"
Jan 27 16:05:36 crc kubenswrapper[4767]: I0127 16:05:36.234940 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dabee4e2-e6b6-4a25-8ab9-bee476131561-utilities\") pod \"community-operators-vp72t\" (UID: \"dabee4e2-e6b6-4a25-8ab9-bee476131561\") " pod="openshift-marketplace/community-operators-vp72t"
Jan 27 16:05:36 crc kubenswrapper[4767]: I0127 16:05:36.234992 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dabee4e2-e6b6-4a25-8ab9-bee476131561-catalog-content\") pod \"community-operators-vp72t\" (UID: \"dabee4e2-e6b6-4a25-8ab9-bee476131561\") " pod="openshift-marketplace/community-operators-vp72t"
Jan 27 16:05:36 crc kubenswrapper[4767]: I0127 16:05:36.255649 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdcr2\" (UniqueName: \"kubernetes.io/projected/dabee4e2-e6b6-4a25-8ab9-bee476131561-kube-api-access-cdcr2\") pod \"community-operators-vp72t\" (UID: \"dabee4e2-e6b6-4a25-8ab9-bee476131561\") " pod="openshift-marketplace/community-operators-vp72t"
Jan 27 16:05:36 crc kubenswrapper[4767]: I0127 16:05:36.378566 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vp72t"
Jan 27 16:05:36 crc kubenswrapper[4767]: I0127 16:05:36.817879 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vp72t"]
Jan 27 16:05:37 crc kubenswrapper[4767]: I0127 16:05:37.842560 4767 generic.go:334] "Generic (PLEG): container finished" podID="dabee4e2-e6b6-4a25-8ab9-bee476131561" containerID="11e16b448aeb8950e02ba03917f40d13e2b0c85df8e25df461f92b946863bb94" exitCode=0
Jan 27 16:05:37 crc kubenswrapper[4767]: I0127 16:05:37.842619 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vp72t" event={"ID":"dabee4e2-e6b6-4a25-8ab9-bee476131561","Type":"ContainerDied","Data":"11e16b448aeb8950e02ba03917f40d13e2b0c85df8e25df461f92b946863bb94"}
Jan 27 16:05:37 crc kubenswrapper[4767]: I0127 16:05:37.842999 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vp72t" event={"ID":"dabee4e2-e6b6-4a25-8ab9-bee476131561","Type":"ContainerStarted","Data":"d7f8dbc114ec2719ceee5519b97270e62bc1b7d7d9d64dca91abdde0f4cb7c48"}
Jan 27 16:05:38 crc kubenswrapper[4767]: I0127 16:05:38.852755 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vp72t" event={"ID":"dabee4e2-e6b6-4a25-8ab9-bee476131561","Type":"ContainerStarted","Data":"b4b32cd0000b4088be31d63e8faaed6deb707f8990c05d5014292c7702de5d69"}
Jan 27 16:05:39 crc kubenswrapper[4767]: I0127 16:05:39.862266 4767 generic.go:334] "Generic (PLEG): container finished" podID="dabee4e2-e6b6-4a25-8ab9-bee476131561" containerID="b4b32cd0000b4088be31d63e8faaed6deb707f8990c05d5014292c7702de5d69" exitCode=0
Jan 27 16:05:39 crc kubenswrapper[4767]: I0127 16:05:39.862350 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vp72t" event={"ID":"dabee4e2-e6b6-4a25-8ab9-bee476131561","Type":"ContainerDied","Data":"b4b32cd0000b4088be31d63e8faaed6deb707f8990c05d5014292c7702de5d69"}
Jan 27 16:05:40 crc kubenswrapper[4767]: I0127 16:05:40.871883 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vp72t" event={"ID":"dabee4e2-e6b6-4a25-8ab9-bee476131561","Type":"ContainerStarted","Data":"a3b6144cda226f463f6277c8d354e0103d992d3f04dc63c3e047f6b59e58eda7"}
Jan 27 16:05:40 crc kubenswrapper[4767]: I0127 16:05:40.891094 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vp72t" podStartSLOduration=2.4308459940000002 podStartE2EDuration="4.891077056s" podCreationTimestamp="2026-01-27 16:05:36 +0000 UTC" firstStartedPulling="2026-01-27 16:05:37.844290615 +0000 UTC m=+960.233308158" lastFinishedPulling="2026-01-27 16:05:40.304521697 +0000 UTC m=+962.693539220" observedRunningTime="2026-01-27 16:05:40.885681511 +0000 UTC m=+963.274699034" watchObservedRunningTime="2026-01-27 16:05:40.891077056 +0000 UTC m=+963.280094579"
Jan 27 16:05:42 crc kubenswrapper[4767]: I0127 16:05:42.692311 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-596bfd7f57-hl2fj"
Jan 27 16:05:46 crc kubenswrapper[4767]: I0127 16:05:46.379398 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vp72t"
Jan 27 16:05:46 crc kubenswrapper[4767]: I0127 16:05:46.379652 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vp72t"
Jan 27 16:05:46 crc kubenswrapper[4767]: I0127 16:05:46.420209 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vp72t"
Jan 27 16:05:46 crc kubenswrapper[4767]: I0127 16:05:46.953660 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vp72t"
Jan 27 16:05:46 crc kubenswrapper[4767]: I0127 16:05:46.989436 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vp72t"]
Jan 27 16:05:48 crc kubenswrapper[4767]: I0127 16:05:48.930040 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vp72t" podUID="dabee4e2-e6b6-4a25-8ab9-bee476131561" containerName="registry-server" containerID="cri-o://a3b6144cda226f463f6277c8d354e0103d992d3f04dc63c3e047f6b59e58eda7" gracePeriod=2
Jan 27 16:05:49 crc kubenswrapper[4767]: I0127 16:05:49.279135 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vp72t"
Jan 27 16:05:49 crc kubenswrapper[4767]: I0127 16:05:49.440548 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dabee4e2-e6b6-4a25-8ab9-bee476131561-utilities\") pod \"dabee4e2-e6b6-4a25-8ab9-bee476131561\" (UID: \"dabee4e2-e6b6-4a25-8ab9-bee476131561\") "
Jan 27 16:05:49 crc kubenswrapper[4767]: I0127 16:05:49.440666 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dabee4e2-e6b6-4a25-8ab9-bee476131561-catalog-content\") pod \"dabee4e2-e6b6-4a25-8ab9-bee476131561\" (UID: \"dabee4e2-e6b6-4a25-8ab9-bee476131561\") "
Jan 27 16:05:49 crc kubenswrapper[4767]: I0127 16:05:49.440734 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdcr2\" (UniqueName: \"kubernetes.io/projected/dabee4e2-e6b6-4a25-8ab9-bee476131561-kube-api-access-cdcr2\") pod \"dabee4e2-e6b6-4a25-8ab9-bee476131561\" (UID: \"dabee4e2-e6b6-4a25-8ab9-bee476131561\") "
Jan 27 16:05:49 crc kubenswrapper[4767]: I0127 16:05:49.441429 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dabee4e2-e6b6-4a25-8ab9-bee476131561-utilities" (OuterVolumeSpecName: "utilities") pod "dabee4e2-e6b6-4a25-8ab9-bee476131561" (UID: "dabee4e2-e6b6-4a25-8ab9-bee476131561"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:05:49 crc kubenswrapper[4767]: I0127 16:05:49.449436 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dabee4e2-e6b6-4a25-8ab9-bee476131561-kube-api-access-cdcr2" (OuterVolumeSpecName: "kube-api-access-cdcr2") pod "dabee4e2-e6b6-4a25-8ab9-bee476131561" (UID: "dabee4e2-e6b6-4a25-8ab9-bee476131561"). InnerVolumeSpecName "kube-api-access-cdcr2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:05:49 crc kubenswrapper[4767]: I0127 16:05:49.493526 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dabee4e2-e6b6-4a25-8ab9-bee476131561-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dabee4e2-e6b6-4a25-8ab9-bee476131561" (UID: "dabee4e2-e6b6-4a25-8ab9-bee476131561"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:05:49 crc kubenswrapper[4767]: I0127 16:05:49.543479 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dabee4e2-e6b6-4a25-8ab9-bee476131561-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 16:05:49 crc kubenswrapper[4767]: I0127 16:05:49.543562 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dabee4e2-e6b6-4a25-8ab9-bee476131561-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 16:05:49 crc kubenswrapper[4767]: I0127 16:05:49.543594 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdcr2\" (UniqueName: \"kubernetes.io/projected/dabee4e2-e6b6-4a25-8ab9-bee476131561-kube-api-access-cdcr2\") on node \"crc\" DevicePath \"\""
Jan 27 16:05:49 crc kubenswrapper[4767]: I0127 16:05:49.937566 4767 generic.go:334] "Generic (PLEG): container finished" podID="dabee4e2-e6b6-4a25-8ab9-bee476131561" containerID="a3b6144cda226f463f6277c8d354e0103d992d3f04dc63c3e047f6b59e58eda7" exitCode=0
Jan 27 16:05:49 crc kubenswrapper[4767]: I0127 16:05:49.937612 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vp72t"
Jan 27 16:05:49 crc kubenswrapper[4767]: I0127 16:05:49.937626 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vp72t" event={"ID":"dabee4e2-e6b6-4a25-8ab9-bee476131561","Type":"ContainerDied","Data":"a3b6144cda226f463f6277c8d354e0103d992d3f04dc63c3e047f6b59e58eda7"}
Jan 27 16:05:49 crc kubenswrapper[4767]: I0127 16:05:49.937783 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vp72t" event={"ID":"dabee4e2-e6b6-4a25-8ab9-bee476131561","Type":"ContainerDied","Data":"d7f8dbc114ec2719ceee5519b97270e62bc1b7d7d9d64dca91abdde0f4cb7c48"}
Jan 27 16:05:49 crc kubenswrapper[4767]: I0127 16:05:49.937828 4767 scope.go:117] "RemoveContainer" containerID="a3b6144cda226f463f6277c8d354e0103d992d3f04dc63c3e047f6b59e58eda7"
Jan 27 16:05:49 crc kubenswrapper[4767]: I0127 16:05:49.956185 4767 scope.go:117] "RemoveContainer" containerID="b4b32cd0000b4088be31d63e8faaed6deb707f8990c05d5014292c7702de5d69"
Jan 27 16:05:49 crc kubenswrapper[4767]: I0127 16:05:49.970947 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vp72t"]
Jan 27 16:05:49 crc kubenswrapper[4767]: I0127 16:05:49.975856 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vp72t"]
Jan 27 16:05:49 crc kubenswrapper[4767]: I0127 16:05:49.998865 4767 scope.go:117] "RemoveContainer" containerID="11e16b448aeb8950e02ba03917f40d13e2b0c85df8e25df461f92b946863bb94"
Jan 27 16:05:50 crc kubenswrapper[4767]: I0127 16:05:50.013472 4767 scope.go:117] "RemoveContainer" containerID="a3b6144cda226f463f6277c8d354e0103d992d3f04dc63c3e047f6b59e58eda7"
Jan 27 16:05:50 crc kubenswrapper[4767]: E0127 16:05:50.013941 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3b6144cda226f463f6277c8d354e0103d992d3f04dc63c3e047f6b59e58eda7\": container with ID starting with a3b6144cda226f463f6277c8d354e0103d992d3f04dc63c3e047f6b59e58eda7 not found: ID does not exist" containerID="a3b6144cda226f463f6277c8d354e0103d992d3f04dc63c3e047f6b59e58eda7"
Jan 27 16:05:50 crc kubenswrapper[4767]: I0127 16:05:50.013974 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3b6144cda226f463f6277c8d354e0103d992d3f04dc63c3e047f6b59e58eda7"} err="failed to get container status \"a3b6144cda226f463f6277c8d354e0103d992d3f04dc63c3e047f6b59e58eda7\": rpc error: code = NotFound desc = could not find container \"a3b6144cda226f463f6277c8d354e0103d992d3f04dc63c3e047f6b59e58eda7\": container with ID starting with a3b6144cda226f463f6277c8d354e0103d992d3f04dc63c3e047f6b59e58eda7 not found: ID does not exist"
Jan 27 16:05:50 crc kubenswrapper[4767]: I0127 16:05:50.014012 4767 scope.go:117] "RemoveContainer" containerID="b4b32cd0000b4088be31d63e8faaed6deb707f8990c05d5014292c7702de5d69"
Jan 27 16:05:50 crc kubenswrapper[4767]: E0127 16:05:50.014452 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4b32cd0000b4088be31d63e8faaed6deb707f8990c05d5014292c7702de5d69\": container with ID starting with b4b32cd0000b4088be31d63e8faaed6deb707f8990c05d5014292c7702de5d69 not found: ID does not exist" containerID="b4b32cd0000b4088be31d63e8faaed6deb707f8990c05d5014292c7702de5d69"
Jan 27 16:05:50 crc kubenswrapper[4767]: I0127 16:05:50.014518 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4b32cd0000b4088be31d63e8faaed6deb707f8990c05d5014292c7702de5d69"} err="failed to get container status \"b4b32cd0000b4088be31d63e8faaed6deb707f8990c05d5014292c7702de5d69\": rpc error: code = NotFound desc = could not find container \"b4b32cd0000b4088be31d63e8faaed6deb707f8990c05d5014292c7702de5d69\": container with ID starting with b4b32cd0000b4088be31d63e8faaed6deb707f8990c05d5014292c7702de5d69 not found: ID does not exist"
Jan 27 16:05:50 crc kubenswrapper[4767]: I0127 16:05:50.014563 4767 scope.go:117] "RemoveContainer" containerID="11e16b448aeb8950e02ba03917f40d13e2b0c85df8e25df461f92b946863bb94"
Jan 27 16:05:50 crc kubenswrapper[4767]: E0127 16:05:50.014921 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11e16b448aeb8950e02ba03917f40d13e2b0c85df8e25df461f92b946863bb94\": container with ID starting with 11e16b448aeb8950e02ba03917f40d13e2b0c85df8e25df461f92b946863bb94 not found: ID does not exist" containerID="11e16b448aeb8950e02ba03917f40d13e2b0c85df8e25df461f92b946863bb94"
Jan 27 16:05:50 crc kubenswrapper[4767]: I0127 16:05:50.014953 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11e16b448aeb8950e02ba03917f40d13e2b0c85df8e25df461f92b946863bb94"} err="failed to get container status \"11e16b448aeb8950e02ba03917f40d13e2b0c85df8e25df461f92b946863bb94\": rpc error: code = NotFound desc = could not find container \"11e16b448aeb8950e02ba03917f40d13e2b0c85df8e25df461f92b946863bb94\": container with ID starting with 11e16b448aeb8950e02ba03917f40d13e2b0c85df8e25df461f92b946863bb94 not found: ID does not exist"
Jan 27 16:05:50 crc kubenswrapper[4767]: I0127 16:05:50.335365 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dabee4e2-e6b6-4a25-8ab9-bee476131561" path="/var/lib/kubelet/pods/dabee4e2-e6b6-4a25-8ab9-bee476131561/volumes"
Jan 27 16:06:12 crc kubenswrapper[4767]: I0127 16:06:12.432879 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-844f686c44-k5sth"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.278784 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-k4csc"]
Jan 27 16:06:13 crc kubenswrapper[4767]: E0127 16:06:13.279107 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dabee4e2-e6b6-4a25-8ab9-bee476131561" containerName="extract-utilities"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.279129 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="dabee4e2-e6b6-4a25-8ab9-bee476131561" containerName="extract-utilities"
Jan 27 16:06:13 crc kubenswrapper[4767]: E0127 16:06:13.279148 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dabee4e2-e6b6-4a25-8ab9-bee476131561" containerName="registry-server"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.279157 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="dabee4e2-e6b6-4a25-8ab9-bee476131561" containerName="registry-server"
Jan 27 16:06:13 crc kubenswrapper[4767]: E0127 16:06:13.279167 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dabee4e2-e6b6-4a25-8ab9-bee476131561" containerName="extract-content"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.279176 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="dabee4e2-e6b6-4a25-8ab9-bee476131561" containerName="extract-content"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.279344 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="dabee4e2-e6b6-4a25-8ab9-bee476131561" containerName="registry-server"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.281920 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-k4csc"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.284605 4767 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.289781 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.289829 4767 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-pzkct"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.303033 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-6gj67"]
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.304659 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6gj67"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.310395 4767 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.322335 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-6gj67"]
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.384105 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-cphkn"]
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.385474 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-cphkn"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.388247 4767 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-b2mms"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.389074 4767 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.389977 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.389996 4767 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.407046 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-fs7mv"]
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.408630 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-fs7mv"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.411103 4767 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.424282 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-fs7mv"]
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.445449 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d-frr-sockets\") pod \"frr-k8s-k4csc\" (UID: \"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d\") " pod="metallb-system/frr-k8s-k4csc"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.445524 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d-frr-startup\") pod \"frr-k8s-k4csc\" (UID: \"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d\") " pod="metallb-system/frr-k8s-k4csc"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.445678 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkm2j\" (UniqueName: \"kubernetes.io/projected/0ec154cf-23c7-4b7f-acc3-33c56d7e4cae-kube-api-access-nkm2j\") pod \"frr-k8s-webhook-server-7df86c4f6c-6gj67\" (UID: \"0ec154cf-23c7-4b7f-acc3-33c56d7e4cae\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6gj67"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.445722 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d-metrics\") pod \"frr-k8s-k4csc\" (UID: \"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d\") " pod="metallb-system/frr-k8s-k4csc"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.445786 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d-metrics-certs\") pod \"frr-k8s-k4csc\" (UID: \"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d\") " pod="metallb-system/frr-k8s-k4csc"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.445877 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0ec154cf-23c7-4b7f-acc3-33c56d7e4cae-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-6gj67\" (UID: \"0ec154cf-23c7-4b7f-acc3-33c56d7e4cae\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6gj67"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.445931 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d-frr-conf\") pod \"frr-k8s-k4csc\" (UID: \"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d\") " pod="metallb-system/frr-k8s-k4csc"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.445980 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc82z\" (UniqueName: \"kubernetes.io/projected/bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d-kube-api-access-mc82z\") pod \"frr-k8s-k4csc\" (UID: \"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d\") " pod="metallb-system/frr-k8s-k4csc"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.446015 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d-reloader\") pod \"frr-k8s-k4csc\" (UID: \"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d\") " pod="metallb-system/frr-k8s-k4csc"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.547607 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d-frr-startup\") pod \"frr-k8s-k4csc\" (UID: \"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d\") " pod="metallb-system/frr-k8s-k4csc"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.547966 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nfwd\" (UniqueName: \"kubernetes.io/projected/0974a95e-83af-4ab7-95de-b7ea1211884f-kube-api-access-7nfwd\") pod \"speaker-cphkn\" (UID: \"0974a95e-83af-4ab7-95de-b7ea1211884f\") " pod="metallb-system/speaker-cphkn"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.548093 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0974a95e-83af-4ab7-95de-b7ea1211884f-metrics-certs\") pod \"speaker-cphkn\" (UID: \"0974a95e-83af-4ab7-95de-b7ea1211884f\") " pod="metallb-system/speaker-cphkn"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.548299 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkm2j\" (UniqueName: \"kubernetes.io/projected/0ec154cf-23c7-4b7f-acc3-33c56d7e4cae-kube-api-access-nkm2j\") pod \"frr-k8s-webhook-server-7df86c4f6c-6gj67\" (UID: \"0ec154cf-23c7-4b7f-acc3-33c56d7e4cae\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6gj67"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.548364 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d-metrics\") pod \"frr-k8s-k4csc\" (UID: \"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d\") " pod="metallb-system/frr-k8s-k4csc"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.548427 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fff14b52-109f-4cc6-9361-a577bdcfb615-cert\") pod \"controller-6968d8fdc4-fs7mv\" (UID: \"fff14b52-109f-4cc6-9361-a577bdcfb615\") " pod="metallb-system/controller-6968d8fdc4-fs7mv"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.548477 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d-metrics-certs\") pod \"frr-k8s-k4csc\" (UID: \"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d\") " pod="metallb-system/frr-k8s-k4csc"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.548531 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fff14b52-109f-4cc6-9361-a577bdcfb615-metrics-certs\") pod \"controller-6968d8fdc4-fs7mv\" (UID: \"fff14b52-109f-4cc6-9361-a577bdcfb615\") " pod="metallb-system/controller-6968d8fdc4-fs7mv"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.548557 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0974a95e-83af-4ab7-95de-b7ea1211884f-memberlist\") pod \"speaker-cphkn\" (UID: \"0974a95e-83af-4ab7-95de-b7ea1211884f\") " pod="metallb-system/speaker-cphkn"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.548607 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0ec154cf-23c7-4b7f-acc3-33c56d7e4cae-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-6gj67\" (UID: \"0ec154cf-23c7-4b7f-acc3-33c56d7e4cae\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6gj67"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.548655 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d-frr-conf\") pod \"frr-k8s-k4csc\" (UID: \"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d\") " pod="metallb-system/frr-k8s-k4csc"
Jan 27 16:06:13 crc kubenswrapper[4767]: E0127 16:06:13.548661 4767 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found
Jan 27 16:06:13 crc kubenswrapper[4767]: E0127 16:06:13.548734 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d-metrics-certs podName:bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d nodeName:}" failed. No retries permitted until 2026-01-27 16:06:14.048710204 +0000 UTC m=+996.437727727 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d-metrics-certs") pod "frr-k8s-k4csc" (UID: "bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d") : secret "frr-k8s-certs-secret" not found
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.548765 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc82z\" (UniqueName: \"kubernetes.io/projected/bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d-kube-api-access-mc82z\") pod \"frr-k8s-k4csc\" (UID: \"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d\") " pod="metallb-system/frr-k8s-k4csc"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.548778 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d-frr-startup\") pod \"frr-k8s-k4csc\" (UID: \"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d\") " pod="metallb-system/frr-k8s-k4csc"
Jan 27 16:06:13 crc kubenswrapper[4767]: E0127 16:06:13.548800 4767 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.548796 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d-reloader\") pod \"frr-k8s-k4csc\" (UID: \"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d\") " pod="metallb-system/frr-k8s-k4csc"
Jan 27 16:06:13 crc kubenswrapper[4767]: E0127 16:06:13.548891 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ec154cf-23c7-4b7f-acc3-33c56d7e4cae-cert podName:0ec154cf-23c7-4b7f-acc3-33c56d7e4cae nodeName:}" failed. No retries permitted until 2026-01-27 16:06:14.048849848 +0000 UTC m=+996.437867371 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0ec154cf-23c7-4b7f-acc3-33c56d7e4cae-cert") pod "frr-k8s-webhook-server-7df86c4f6c-6gj67" (UID: "0ec154cf-23c7-4b7f-acc3-33c56d7e4cae") : secret "frr-k8s-webhook-server-cert" not found
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.548919 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d-frr-sockets\") pod \"frr-k8s-k4csc\" (UID: \"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d\") " pod="metallb-system/frr-k8s-k4csc"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.548948 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9pxg\" (UniqueName: \"kubernetes.io/projected/fff14b52-109f-4cc6-9361-a577bdcfb615-kube-api-access-c9pxg\") pod \"controller-6968d8fdc4-fs7mv\" (UID: \"fff14b52-109f-4cc6-9361-a577bdcfb615\") " pod="metallb-system/controller-6968d8fdc4-fs7mv"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.548979 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0974a95e-83af-4ab7-95de-b7ea1211884f-metallb-excludel2\") pod \"speaker-cphkn\" (UID: \"0974a95e-83af-4ab7-95de-b7ea1211884f\") " pod="metallb-system/speaker-cphkn"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.549041 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d-reloader\") pod \"frr-k8s-k4csc\" (UID: \"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d\") " pod="metallb-system/frr-k8s-k4csc"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.549045 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d-frr-conf\") pod \"frr-k8s-k4csc\" (UID: \"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d\") " pod="metallb-system/frr-k8s-k4csc"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.549230 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d-frr-sockets\") pod \"frr-k8s-k4csc\" (UID: \"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d\") " pod="metallb-system/frr-k8s-k4csc"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.549689 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d-metrics\") pod \"frr-k8s-k4csc\" (UID: \"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d\") " pod="metallb-system/frr-k8s-k4csc"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.567016 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkm2j\" (UniqueName: \"kubernetes.io/projected/0ec154cf-23c7-4b7f-acc3-33c56d7e4cae-kube-api-access-nkm2j\") pod \"frr-k8s-webhook-server-7df86c4f6c-6gj67\" (UID: \"0ec154cf-23c7-4b7f-acc3-33c56d7e4cae\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6gj67"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.567805 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc82z\" (UniqueName: \"kubernetes.io/projected/bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d-kube-api-access-mc82z\") pod \"frr-k8s-k4csc\" (UID: \"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d\") " pod="metallb-system/frr-k8s-k4csc"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.650149 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fff14b52-109f-4cc6-9361-a577bdcfb615-cert\") pod \"controller-6968d8fdc4-fs7mv\" (UID: \"fff14b52-109f-4cc6-9361-a577bdcfb615\") " pod="metallb-system/controller-6968d8fdc4-fs7mv"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.650241 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fff14b52-109f-4cc6-9361-a577bdcfb615-metrics-certs\") pod \"controller-6968d8fdc4-fs7mv\" (UID: \"fff14b52-109f-4cc6-9361-a577bdcfb615\") " pod="metallb-system/controller-6968d8fdc4-fs7mv"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.650268 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0974a95e-83af-4ab7-95de-b7ea1211884f-memberlist\") pod \"speaker-cphkn\" (UID: \"0974a95e-83af-4ab7-95de-b7ea1211884f\") " pod="metallb-system/speaker-cphkn"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.650338 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9pxg\" (UniqueName: \"kubernetes.io/projected/fff14b52-109f-4cc6-9361-a577bdcfb615-kube-api-access-c9pxg\") pod \"controller-6968d8fdc4-fs7mv\" (UID: \"fff14b52-109f-4cc6-9361-a577bdcfb615\") " pod="metallb-system/controller-6968d8fdc4-fs7mv"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.650364 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0974a95e-83af-4ab7-95de-b7ea1211884f-metallb-excludel2\") pod \"speaker-cphkn\" (UID: \"0974a95e-83af-4ab7-95de-b7ea1211884f\") " pod="metallb-system/speaker-cphkn"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.650397 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nfwd\" (UniqueName: \"kubernetes.io/projected/0974a95e-83af-4ab7-95de-b7ea1211884f-kube-api-access-7nfwd\") pod \"speaker-cphkn\" (UID: \"0974a95e-83af-4ab7-95de-b7ea1211884f\") " pod="metallb-system/speaker-cphkn"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.650418 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0974a95e-83af-4ab7-95de-b7ea1211884f-metrics-certs\") pod \"speaker-cphkn\" (UID: \"0974a95e-83af-4ab7-95de-b7ea1211884f\") " pod="metallb-system/speaker-cphkn"
Jan 27 16:06:13 crc kubenswrapper[4767]: E0127 16:06:13.650732 4767 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 27 16:06:13 crc kubenswrapper[4767]: E0127 16:06:13.650800 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0974a95e-83af-4ab7-95de-b7ea1211884f-memberlist podName:0974a95e-83af-4ab7-95de-b7ea1211884f nodeName:}" failed. No retries permitted until 2026-01-27 16:06:14.150782443 +0000 UTC m=+996.539799966 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/0974a95e-83af-4ab7-95de-b7ea1211884f-memberlist") pod "speaker-cphkn" (UID: "0974a95e-83af-4ab7-95de-b7ea1211884f") : secret "metallb-memberlist" not found
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.651434 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0974a95e-83af-4ab7-95de-b7ea1211884f-metallb-excludel2\") pod \"speaker-cphkn\" (UID: \"0974a95e-83af-4ab7-95de-b7ea1211884f\") " pod="metallb-system/speaker-cphkn"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.653829 4767 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.654225 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fff14b52-109f-4cc6-9361-a577bdcfb615-metrics-certs\") pod \"controller-6968d8fdc4-fs7mv\" (UID: \"fff14b52-109f-4cc6-9361-a577bdcfb615\") " pod="metallb-system/controller-6968d8fdc4-fs7mv"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.654659 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0974a95e-83af-4ab7-95de-b7ea1211884f-metrics-certs\") pod \"speaker-cphkn\" (UID: \"0974a95e-83af-4ab7-95de-b7ea1211884f\") " pod="metallb-system/speaker-cphkn"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.665602 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fff14b52-109f-4cc6-9361-a577bdcfb615-cert\") pod \"controller-6968d8fdc4-fs7mv\" (UID: \"fff14b52-109f-4cc6-9361-a577bdcfb615\") " pod="metallb-system/controller-6968d8fdc4-fs7mv"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.670755 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nfwd\" (UniqueName: \"kubernetes.io/projected/0974a95e-83af-4ab7-95de-b7ea1211884f-kube-api-access-7nfwd\") pod \"speaker-cphkn\" (UID: \"0974a95e-83af-4ab7-95de-b7ea1211884f\") " pod="metallb-system/speaker-cphkn"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.672133 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9pxg\" (UniqueName: \"kubernetes.io/projected/fff14b52-109f-4cc6-9361-a577bdcfb615-kube-api-access-c9pxg\") pod \"controller-6968d8fdc4-fs7mv\" (UID: \"fff14b52-109f-4cc6-9361-a577bdcfb615\") " pod="metallb-system/controller-6968d8fdc4-fs7mv"
Jan 27 16:06:13 crc kubenswrapper[4767]: I0127 16:06:13.732867 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-fs7mv"
Jan 27 16:06:14 crc kubenswrapper[4767]: I0127 16:06:14.056445 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d-metrics-certs\") pod \"frr-k8s-k4csc\" (UID: \"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d\") " pod="metallb-system/frr-k8s-k4csc"
Jan 27 16:06:14 crc kubenswrapper[4767]: I0127 16:06:14.056919 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0ec154cf-23c7-4b7f-acc3-33c56d7e4cae-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-6gj67\" (UID: \"0ec154cf-23c7-4b7f-acc3-33c56d7e4cae\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6gj67"
Jan 27 16:06:14 crc kubenswrapper[4767]: I0127 16:06:14.060947 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d-metrics-certs\") pod \"frr-k8s-k4csc\" (UID: \"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d\") " pod="metallb-system/frr-k8s-k4csc"
Jan 27 16:06:14 crc kubenswrapper[4767]: I0127 16:06:14.063012 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0ec154cf-23c7-4b7f-acc3-33c56d7e4cae-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-6gj67\" (UID: \"0ec154cf-23c7-4b7f-acc3-33c56d7e4cae\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6gj67"
Jan 27 16:06:14 crc kubenswrapper[4767]: I0127 16:06:14.140971 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-fs7mv"]
Jan 27 16:06:14 crc kubenswrapper[4767]: I0127 16:06:14.158450 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0974a95e-83af-4ab7-95de-b7ea1211884f-memberlist\") pod \"speaker-cphkn\" (UID: \"0974a95e-83af-4ab7-95de-b7ea1211884f\") " pod="metallb-system/speaker-cphkn"
Jan 27 16:06:14 crc kubenswrapper[4767]: E0127 16:06:14.158603 4767 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 27 16:06:14 crc kubenswrapper[4767]: E0127 16:06:14.158684 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0974a95e-83af-4ab7-95de-b7ea1211884f-memberlist podName:0974a95e-83af-4ab7-95de-b7ea1211884f nodeName:}" failed. No retries permitted until 2026-01-27 16:06:15.158661305 +0000 UTC m=+997.547678848 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/0974a95e-83af-4ab7-95de-b7ea1211884f-memberlist") pod "speaker-cphkn" (UID: "0974a95e-83af-4ab7-95de-b7ea1211884f") : secret "metallb-memberlist" not found
Jan 27 16:06:14 crc kubenswrapper[4767]: I0127 16:06:14.204951 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-k4csc"
Jan 27 16:06:14 crc kubenswrapper[4767]: I0127 16:06:14.222540 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6gj67"
Jan 27 16:06:14 crc kubenswrapper[4767]: I0127 16:06:14.469246 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-6gj67"]
Jan 27 16:06:14 crc kubenswrapper[4767]: W0127 16:06:14.471486 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ec154cf_23c7_4b7f_acc3_33c56d7e4cae.slice/crio-30220aa84fe6924cd4619835449e22628ddaa0ee3ba3b02d5304250ac314629d WatchSource:0}: Error finding container 30220aa84fe6924cd4619835449e22628ddaa0ee3ba3b02d5304250ac314629d: Status 404 returned error can't find the container with id 30220aa84fe6924cd4619835449e22628ddaa0ee3ba3b02d5304250ac314629d
Jan 27 16:06:15 crc kubenswrapper[4767]: I0127 16:06:15.089274 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-fs7mv" event={"ID":"fff14b52-109f-4cc6-9361-a577bdcfb615","Type":"ContainerStarted","Data":"d0d85c9547ccb9cd5e9f41961711c7f90d7c83b4bd3669269b9f89bca3d3b90b"}
Jan 27 16:06:15 crc kubenswrapper[4767]: I0127 16:06:15.089324 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-fs7mv" event={"ID":"fff14b52-109f-4cc6-9361-a577bdcfb615","Type":"ContainerStarted","Data":"e47aab929faccbdc6c61c0080ce4284b424b2eb552e734ef752f80a8495ff57a"}
Jan 27 16:06:15 crc kubenswrapper[4767]: I0127 16:06:15.089338 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-fs7mv" event={"ID":"fff14b52-109f-4cc6-9361-a577bdcfb615","Type":"ContainerStarted","Data":"b85c6d7a7838661f1aedbc75e6194a3ea38ee7047339ba408ce7aaa6d4f8c39d"}
Jan 27 16:06:15 crc kubenswrapper[4767]: I0127 16:06:15.089394 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-fs7mv"
Jan 27 16:06:15 crc kubenswrapper[4767]: I0127 16:06:15.090974 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k4csc" event={"ID":"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d","Type":"ContainerStarted","Data":"934cf53d4f1fc5c632d412cce645ada47b512400b0fcf361c48a272e5a9354eb"}
Jan 27 16:06:15 crc kubenswrapper[4767]: I0127 16:06:15.092599 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6gj67" event={"ID":"0ec154cf-23c7-4b7f-acc3-33c56d7e4cae","Type":"ContainerStarted","Data":"30220aa84fe6924cd4619835449e22628ddaa0ee3ba3b02d5304250ac314629d"}
Jan 27 16:06:15 crc kubenswrapper[4767]: I0127 16:06:15.109540 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-fs7mv" podStartSLOduration=2.109517871 podStartE2EDuration="2.109517871s" podCreationTimestamp="2026-01-27 16:06:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:06:15.10460982 +0000 UTC m=+997.493627423" watchObservedRunningTime="2026-01-27 16:06:15.109517871 +0000 UTC m=+997.498535404"
Jan 27 16:06:15 crc kubenswrapper[4767]: I0127 16:06:15.173163 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0974a95e-83af-4ab7-95de-b7ea1211884f-memberlist\") pod \"speaker-cphkn\" (UID: \"0974a95e-83af-4ab7-95de-b7ea1211884f\") " pod="metallb-system/speaker-cphkn"
Jan 27 16:06:15 crc kubenswrapper[4767]: I0127 16:06:15.179851 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0974a95e-83af-4ab7-95de-b7ea1211884f-memberlist\") pod \"speaker-cphkn\" (UID: \"0974a95e-83af-4ab7-95de-b7ea1211884f\") " pod="metallb-system/speaker-cphkn"
Jan 27 16:06:15 crc kubenswrapper[4767]: I0127 16:06:15.208894 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-cphkn"
Jan 27 16:06:16 crc kubenswrapper[4767]: I0127 16:06:16.104940 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cphkn" event={"ID":"0974a95e-83af-4ab7-95de-b7ea1211884f","Type":"ContainerStarted","Data":"de731bae4c4bffb47e24199dc90dd8bf26b9907c3054e45c8641835a4b591eea"}
Jan 27 16:06:16 crc kubenswrapper[4767]: I0127 16:06:16.105488 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cphkn" event={"ID":"0974a95e-83af-4ab7-95de-b7ea1211884f","Type":"ContainerStarted","Data":"a3fea588bd635f5a11b0a9e93ff2c8e1af233f9a7dae830274ca8c89484978c7"}
Jan 27 16:06:17 crc kubenswrapper[4767]: I0127 16:06:17.125885 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cphkn" event={"ID":"0974a95e-83af-4ab7-95de-b7ea1211884f","Type":"ContainerStarted","Data":"c41eaf14709ef915ac5da6dd03a6a54fcdac90007429601aa94430321fb136e4"}
Jan 27 16:06:17 crc kubenswrapper[4767]: I0127 16:06:17.126361 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-cphkn"
Jan 27 16:06:17 crc kubenswrapper[4767]: I0127 16:06:17.151449 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-cphkn" podStartSLOduration=4.15143123 podStartE2EDuration="4.15143123s" podCreationTimestamp="2026-01-27 16:06:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:06:17.147550268 +0000 UTC m=+999.536567811" watchObservedRunningTime="2026-01-27 16:06:17.15143123 +0000 UTC m=+999.540448743"
Jan 27 16:06:23 crc kubenswrapper[4767]: I0127 16:06:23.170737 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6gj67" event={"ID":"0ec154cf-23c7-4b7f-acc3-33c56d7e4cae","Type":"ContainerStarted","Data":"9108e95a692db0136701a4a5d716b9616a7b97ec65edaf3a54c89b189cabab4c"}
Jan 27 16:06:23 crc kubenswrapper[4767]: I0127 16:06:23.171192 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6gj67"
Jan 27 16:06:23 crc kubenswrapper[4767]: I0127 16:06:23.172342 4767 generic.go:334] "Generic (PLEG): container finished" podID="bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d" containerID="def018ea42064ba6e7c1400a652a671873d5fd77f84a5aa885b66dc4f450292f" exitCode=0
Jan 27 16:06:23 crc kubenswrapper[4767]: I0127 16:06:23.172366 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k4csc" event={"ID":"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d","Type":"ContainerDied","Data":"def018ea42064ba6e7c1400a652a671873d5fd77f84a5aa885b66dc4f450292f"}
Jan 27 16:06:23 crc kubenswrapper[4767]: I0127 16:06:23.186542 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6gj67" podStartSLOduration=2.136749155 podStartE2EDuration="10.186523978s" podCreationTimestamp="2026-01-27 16:06:13 +0000 UTC" firstStartedPulling="2026-01-27 16:06:14.473485919 +0000 UTC m=+996.862503442" lastFinishedPulling="2026-01-27 16:06:22.523260742 +0000 UTC m=+1004.912278265" observedRunningTime="2026-01-27 16:06:23.184949392 +0000 UTC m=+1005.573966915" watchObservedRunningTime="2026-01-27 16:06:23.186523978 +0000 UTC m=+1005.575541501"
Jan 27 16:06:24 crc kubenswrapper[4767]: I0127 16:06:24.181180 4767 generic.go:334] "Generic (PLEG): container finished" podID="bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d" containerID="8cbd31f81dcf6a25832abf6b49e5b205b8e08af7bd88f262a06f57f5dc29f7e5" exitCode=0
Jan 27 16:06:24 crc kubenswrapper[4767]: I0127 16:06:24.181247 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k4csc" event={"ID":"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d","Type":"ContainerDied","Data":"8cbd31f81dcf6a25832abf6b49e5b205b8e08af7bd88f262a06f57f5dc29f7e5"}
Jan 27 16:06:24 crc kubenswrapper[4767]: I0127 16:06:24.857885 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 16:06:24 crc kubenswrapper[4767]: I0127 16:06:24.858250 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 16:06:25 crc kubenswrapper[4767]: I0127 16:06:25.190852 4767 generic.go:334] "Generic (PLEG): container finished" podID="bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d" containerID="d2286662eed447a166762f88b7788824bcf523ebf91f49fd1615d2fbf12d5e7b" exitCode=0
Jan 27 16:06:25 crc kubenswrapper[4767]: I0127 16:06:25.190901 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k4csc" event={"ID":"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d","Type":"ContainerDied","Data":"d2286662eed447a166762f88b7788824bcf523ebf91f49fd1615d2fbf12d5e7b"}
Jan 27 16:06:25 crc kubenswrapper[4767]: I0127 16:06:25.213178 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-cphkn"
Jan 27 16:06:26 crc kubenswrapper[4767]: I0127 16:06:26.200563 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k4csc" event={"ID":"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d","Type":"ContainerStarted","Data":"a5ea04b00f5dbbc44d5447571ed2931bdeb59e369f67615709aad1d1b3093d7a"}
Jan 27 16:06:26 crc kubenswrapper[4767]: I0127 16:06:26.200885 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-k4csc"
Jan 27 16:06:26 crc kubenswrapper[4767]: I0127 16:06:26.200896 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k4csc" event={"ID":"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d","Type":"ContainerStarted","Data":"9e0bdc87f00b9169775eab0f8d12b761f78647aba6466a304144a74673980885"}
Jan 27 16:06:26 crc kubenswrapper[4767]: I0127 16:06:26.200907 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k4csc" event={"ID":"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d","Type":"ContainerStarted","Data":"75993f88f0a96371c88fce59d6a95fcf65be5a7eb35bac065ae14b41822e455a"}
Jan 27 16:06:26 crc kubenswrapper[4767]: I0127 16:06:26.200917 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="metallb-system/frr-k8s-k4csc" event={"ID":"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d","Type":"ContainerStarted","Data":"7f9ff67e2392ad5de64a12492932f928f2ebdfcb87c3c2eca2f7eefa2b212328"} Jan 27 16:06:26 crc kubenswrapper[4767]: I0127 16:06:26.200926 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k4csc" event={"ID":"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d","Type":"ContainerStarted","Data":"53db07bc2dc85526604cf331220bfb1d2b78cdc5a889465add26570f8231ef2e"} Jan 27 16:06:26 crc kubenswrapper[4767]: I0127 16:06:26.200935 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k4csc" event={"ID":"bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d","Type":"ContainerStarted","Data":"70948f008d3524c14bc3ada1ecd9e4d5c4355222898b73df397f8355c7f765d4"} Jan 27 16:06:26 crc kubenswrapper[4767]: I0127 16:06:26.224481 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-k4csc" podStartSLOduration=5.017631297 podStartE2EDuration="13.224464193s" podCreationTimestamp="2026-01-27 16:06:13 +0000 UTC" firstStartedPulling="2026-01-27 16:06:14.362620027 +0000 UTC m=+996.751637550" lastFinishedPulling="2026-01-27 16:06:22.569452913 +0000 UTC m=+1004.958470446" observedRunningTime="2026-01-27 16:06:26.221297992 +0000 UTC m=+1008.610315515" watchObservedRunningTime="2026-01-27 16:06:26.224464193 +0000 UTC m=+1008.613481716" Jan 27 16:06:26 crc kubenswrapper[4767]: I0127 16:06:26.851328 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xvk2m"] Jan 27 16:06:26 crc kubenswrapper[4767]: I0127 16:06:26.853246 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xvk2m" Jan 27 16:06:26 crc kubenswrapper[4767]: I0127 16:06:26.867371 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xvk2m"] Jan 27 16:06:26 crc kubenswrapper[4767]: I0127 16:06:26.949183 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/821cadc9-d1b1-4038-877f-f26c8974e7ca-catalog-content\") pod \"certified-operators-xvk2m\" (UID: \"821cadc9-d1b1-4038-877f-f26c8974e7ca\") " pod="openshift-marketplace/certified-operators-xvk2m" Jan 27 16:06:26 crc kubenswrapper[4767]: I0127 16:06:26.949280 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52rdt\" (UniqueName: \"kubernetes.io/projected/821cadc9-d1b1-4038-877f-f26c8974e7ca-kube-api-access-52rdt\") pod \"certified-operators-xvk2m\" (UID: \"821cadc9-d1b1-4038-877f-f26c8974e7ca\") " pod="openshift-marketplace/certified-operators-xvk2m" Jan 27 16:06:26 crc kubenswrapper[4767]: I0127 16:06:26.949361 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/821cadc9-d1b1-4038-877f-f26c8974e7ca-utilities\") pod \"certified-operators-xvk2m\" (UID: \"821cadc9-d1b1-4038-877f-f26c8974e7ca\") " pod="openshift-marketplace/certified-operators-xvk2m" Jan 27 16:06:27 crc kubenswrapper[4767]: I0127 16:06:27.050024 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/821cadc9-d1b1-4038-877f-f26c8974e7ca-utilities\") pod \"certified-operators-xvk2m\" (UID: \"821cadc9-d1b1-4038-877f-f26c8974e7ca\") " 
pod="openshift-marketplace/certified-operators-xvk2m" Jan 27 16:06:27 crc kubenswrapper[4767]: I0127 16:06:27.050114 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/821cadc9-d1b1-4038-877f-f26c8974e7ca-catalog-content\") pod \"certified-operators-xvk2m\" (UID: \"821cadc9-d1b1-4038-877f-f26c8974e7ca\") " pod="openshift-marketplace/certified-operators-xvk2m" Jan 27 16:06:27 crc kubenswrapper[4767]: I0127 16:06:27.050135 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52rdt\" (UniqueName: \"kubernetes.io/projected/821cadc9-d1b1-4038-877f-f26c8974e7ca-kube-api-access-52rdt\") pod \"certified-operators-xvk2m\" (UID: \"821cadc9-d1b1-4038-877f-f26c8974e7ca\") " pod="openshift-marketplace/certified-operators-xvk2m" Jan 27 16:06:27 crc kubenswrapper[4767]: I0127 16:06:27.050517 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/821cadc9-d1b1-4038-877f-f26c8974e7ca-utilities\") pod \"certified-operators-xvk2m\" (UID: \"821cadc9-d1b1-4038-877f-f26c8974e7ca\") " pod="openshift-marketplace/certified-operators-xvk2m" Jan 27 16:06:27 crc kubenswrapper[4767]: I0127 16:06:27.050737 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/821cadc9-d1b1-4038-877f-f26c8974e7ca-catalog-content\") pod \"certified-operators-xvk2m\" (UID: \"821cadc9-d1b1-4038-877f-f26c8974e7ca\") " pod="openshift-marketplace/certified-operators-xvk2m" Jan 27 16:06:27 crc kubenswrapper[4767]: I0127 16:06:27.072888 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52rdt\" (UniqueName: \"kubernetes.io/projected/821cadc9-d1b1-4038-877f-f26c8974e7ca-kube-api-access-52rdt\") pod \"certified-operators-xvk2m\" (UID: \"821cadc9-d1b1-4038-877f-f26c8974e7ca\") " pod="openshift-marketplace/certified-operators-xvk2m" Jan 27 16:06:27 crc kubenswrapper[4767]: I0127 16:06:27.178486 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xvk2m" Jan 27 16:06:27 crc kubenswrapper[4767]: I0127 16:06:27.503166 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xvk2m"] Jan 27 16:06:28 crc kubenswrapper[4767]: I0127 16:06:28.218276 4767 generic.go:334] "Generic (PLEG): container finished" podID="821cadc9-d1b1-4038-877f-f26c8974e7ca" containerID="1b690c4a2570c722505e17fe22d7d032b470cdf6922b9228cca64e8b5f96e590" exitCode=0 Jan 27 16:06:28 crc kubenswrapper[4767]: I0127 16:06:28.218419 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvk2m" event={"ID":"821cadc9-d1b1-4038-877f-f26c8974e7ca","Type":"ContainerDied","Data":"1b690c4a2570c722505e17fe22d7d032b470cdf6922b9228cca64e8b5f96e590"} Jan 27 16:06:28 crc kubenswrapper[4767]: I0127 16:06:28.218508 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvk2m" event={"ID":"821cadc9-d1b1-4038-877f-f26c8974e7ca","Type":"ContainerStarted","Data":"554dcb502acf1a2a0cce2894275a7d53f3cb508fd40e6b43a39ac44973f2c8e7"} Jan 27 16:06:29 crc kubenswrapper[4767]: I0127 16:06:29.205624 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-k4csc" Jan 27 16:06:29 crc kubenswrapper[4767]: I0127 16:06:29.228424 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvk2m" event={"ID":"821cadc9-d1b1-4038-877f-f26c8974e7ca","Type":"ContainerStarted","Data":"8003f10fe9dca5cbb65bfa58755b32177b0550a8cdfed1f442149e981253e7f7"} Jan 27 16:06:29 crc kubenswrapper[4767]: I0127 16:06:29.257425 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-k4csc" Jan 27 16:06:30 crc kubenswrapper[4767]: I0127 16:06:30.238583 4767 generic.go:334] "Generic (PLEG): container finished" podID="821cadc9-d1b1-4038-877f-f26c8974e7ca" containerID="8003f10fe9dca5cbb65bfa58755b32177b0550a8cdfed1f442149e981253e7f7" exitCode=0 Jan 27 16:06:30 crc kubenswrapper[4767]: I0127 16:06:30.238643 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvk2m" event={"ID":"821cadc9-d1b1-4038-877f-f26c8974e7ca","Type":"ContainerDied","Data":"8003f10fe9dca5cbb65bfa58755b32177b0550a8cdfed1f442149e981253e7f7"} Jan 27 16:06:31 crc kubenswrapper[4767]: I0127 16:06:31.251843 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvk2m" event={"ID":"821cadc9-d1b1-4038-877f-f26c8974e7ca","Type":"ContainerStarted","Data":"ccd4fcb3e27d326622bd274129f891b397afcfc160dcb7b54203be9f17e97355"} Jan 27 16:06:31 crc kubenswrapper[4767]: I0127 16:06:31.277754 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xvk2m" podStartSLOduration=2.871929427 podStartE2EDuration="5.277734063s" podCreationTimestamp="2026-01-27 16:06:26 +0000 UTC" firstStartedPulling="2026-01-27 16:06:28.220723848 +0000 UTC m=+1010.609741371" lastFinishedPulling="2026-01-27 16:06:30.626528484 +0000 UTC m=+1013.015546007" observedRunningTime="2026-01-27 16:06:31.272046419 +0000 UTC m=+1013.661063962" watchObservedRunningTime="2026-01-27 16:06:31.277734063 +0000 UTC m=+1013.666751586" Jan 27 16:06:31 crc kubenswrapper[4767]: I0127 16:06:31.615826 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-vnfhl"] Jan 27 16:06:31 crc 
kubenswrapper[4767]: I0127 16:06:31.616778 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vnfhl" Jan 27 16:06:31 crc kubenswrapper[4767]: I0127 16:06:31.618864 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 27 16:06:31 crc kubenswrapper[4767]: I0127 16:06:31.619089 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-psk6d" Jan 27 16:06:31 crc kubenswrapper[4767]: I0127 16:06:31.620440 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 27 16:06:31 crc kubenswrapper[4767]: I0127 16:06:31.628054 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vnfhl"] Jan 27 16:06:31 crc kubenswrapper[4767]: I0127 16:06:31.714945 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvvj6\" (UniqueName: \"kubernetes.io/projected/0f3acb03-e177-4372-a36e-250bffeaeb15-kube-api-access-nvvj6\") pod \"openstack-operator-index-vnfhl\" (UID: \"0f3acb03-e177-4372-a36e-250bffeaeb15\") " pod="openstack-operators/openstack-operator-index-vnfhl" Jan 27 16:06:31 crc kubenswrapper[4767]: I0127 16:06:31.816473 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvvj6\" (UniqueName: \"kubernetes.io/projected/0f3acb03-e177-4372-a36e-250bffeaeb15-kube-api-access-nvvj6\") pod \"openstack-operator-index-vnfhl\" (UID: \"0f3acb03-e177-4372-a36e-250bffeaeb15\") " pod="openstack-operators/openstack-operator-index-vnfhl" Jan 27 16:06:31 crc kubenswrapper[4767]: I0127 16:06:31.843327 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvvj6\" (UniqueName: \"kubernetes.io/projected/0f3acb03-e177-4372-a36e-250bffeaeb15-kube-api-access-nvvj6\") pod \"openstack-operator-index-vnfhl\" (UID: \"0f3acb03-e177-4372-a36e-250bffeaeb15\") " pod="openstack-operators/openstack-operator-index-vnfhl" Jan 27 16:06:31 crc kubenswrapper[4767]: I0127 16:06:31.941061 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vnfhl" Jan 27 16:06:32 crc kubenswrapper[4767]: I0127 16:06:32.378039 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vnfhl"] Jan 27 16:06:33 crc kubenswrapper[4767]: I0127 16:06:33.265519 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vnfhl" event={"ID":"0f3acb03-e177-4372-a36e-250bffeaeb15","Type":"ContainerStarted","Data":"0a75937a361ca80e5f3d0f86eb3f3f00e4ed4ab0a8b43282225bf3426f3443c5"} Jan 27 16:06:33 crc kubenswrapper[4767]: I0127 16:06:33.738232 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-fs7mv" Jan 27 16:06:34 crc kubenswrapper[4767]: I0127 16:06:34.231265 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6gj67" Jan 27 16:06:36 crc kubenswrapper[4767]: I0127 16:06:36.815760 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vnfhl"] Jan 27 16:06:37 crc kubenswrapper[4767]: I0127 16:06:37.178821 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xvk2m" Jan 27 16:06:37 crc kubenswrapper[4767]: I0127 16:06:37.179231 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xvk2m" Jan 27 16:06:37 crc kubenswrapper[4767]: I0127 16:06:37.231320 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xvk2m" Jan 27 16:06:37 crc kubenswrapper[4767]: I0127 16:06:37.290692 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vnfhl" event={"ID":"0f3acb03-e177-4372-a36e-250bffeaeb15","Type":"ContainerStarted","Data":"9d817550519da6d4c9336e47f1f28f7e1b86686c95c4fcbaecd4e5b3332ba76e"} Jan 27 16:06:37 crc kubenswrapper[4767]: I0127 16:06:37.290790 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-vnfhl" podUID="0f3acb03-e177-4372-a36e-250bffeaeb15" containerName="registry-server" containerID="cri-o://9d817550519da6d4c9336e47f1f28f7e1b86686c95c4fcbaecd4e5b3332ba76e" gracePeriod=2 Jan 27 16:06:37 crc kubenswrapper[4767]: I0127 16:06:37.335856 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xvk2m" Jan 27 16:06:37 crc kubenswrapper[4767]: I0127 16:06:37.355503 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-vnfhl" podStartSLOduration=2.100307077 podStartE2EDuration="6.355480749s" podCreationTimestamp="2026-01-27 16:06:31 +0000 UTC" firstStartedPulling="2026-01-27 16:06:32.384664173 +0000 UTC m=+1014.773681706" lastFinishedPulling="2026-01-27 16:06:36.639837855 +0000 UTC m=+1019.028855378" observedRunningTime="2026-01-27 16:06:37.32215894 +0000 UTC m=+1019.711176463" watchObservedRunningTime="2026-01-27 16:06:37.355480749 +0000 UTC m=+1019.744498272" Jan 27 16:06:37 crc kubenswrapper[4767]: I0127 16:06:37.428188 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-lmbvl"] Jan 27 16:06:37 crc kubenswrapper[4767]: I0127 16:06:37.430170 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-lmbvl" Jan 27 16:06:37 crc kubenswrapper[4767]: I0127 16:06:37.434374 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-lmbvl"] Jan 27 16:06:37 crc kubenswrapper[4767]: I0127 16:06:37.627528 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwz7h\" (UniqueName: \"kubernetes.io/projected/ca30eebc-8930-44d6-8a4b-deea9c5dbe56-kube-api-access-dwz7h\") pod \"openstack-operator-index-lmbvl\" (UID: \"ca30eebc-8930-44d6-8a4b-deea9c5dbe56\") " pod="openstack-operators/openstack-operator-index-lmbvl" Jan 27 16:06:37 crc kubenswrapper[4767]: I0127 16:06:37.652944 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vnfhl" Jan 27 16:06:37 crc kubenswrapper[4767]: I0127 16:06:37.728902 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvvj6\" (UniqueName: \"kubernetes.io/projected/0f3acb03-e177-4372-a36e-250bffeaeb15-kube-api-access-nvvj6\") pod \"0f3acb03-e177-4372-a36e-250bffeaeb15\" (UID: \"0f3acb03-e177-4372-a36e-250bffeaeb15\") " Jan 27 16:06:37 crc kubenswrapper[4767]: I0127 16:06:37.729124 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwz7h\" (UniqueName: \"kubernetes.io/projected/ca30eebc-8930-44d6-8a4b-deea9c5dbe56-kube-api-access-dwz7h\") pod \"openstack-operator-index-lmbvl\" (UID: \"ca30eebc-8930-44d6-8a4b-deea9c5dbe56\") " pod="openstack-operators/openstack-operator-index-lmbvl" Jan 27 16:06:37 crc kubenswrapper[4767]: I0127 16:06:37.734380 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f3acb03-e177-4372-a36e-250bffeaeb15-kube-api-access-nvvj6" (OuterVolumeSpecName: "kube-api-access-nvvj6") pod "0f3acb03-e177-4372-a36e-250bffeaeb15" (UID: "0f3acb03-e177-4372-a36e-250bffeaeb15"). InnerVolumeSpecName "kube-api-access-nvvj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:06:37 crc kubenswrapper[4767]: I0127 16:06:37.745214 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwz7h\" (UniqueName: \"kubernetes.io/projected/ca30eebc-8930-44d6-8a4b-deea9c5dbe56-kube-api-access-dwz7h\") pod \"openstack-operator-index-lmbvl\" (UID: \"ca30eebc-8930-44d6-8a4b-deea9c5dbe56\") " pod="openstack-operators/openstack-operator-index-lmbvl" Jan 27 16:06:37 crc kubenswrapper[4767]: I0127 16:06:37.767182 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-lmbvl" Jan 27 16:06:37 crc kubenswrapper[4767]: I0127 16:06:37.830356 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvvj6\" (UniqueName: \"kubernetes.io/projected/0f3acb03-e177-4372-a36e-250bffeaeb15-kube-api-access-nvvj6\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:38 crc kubenswrapper[4767]: I0127 16:06:38.200871 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-lmbvl"] Jan 27 16:06:38 crc kubenswrapper[4767]: W0127 16:06:38.208312 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca30eebc_8930_44d6_8a4b_deea9c5dbe56.slice/crio-9c611e27d19efe16f1597c7e81b3adb713f38ecdcc544a5ed1bf47f2cd487582 WatchSource:0}: Error finding container 9c611e27d19efe16f1597c7e81b3adb713f38ecdcc544a5ed1bf47f2cd487582: Status 404 returned error can't find the container with id 9c611e27d19efe16f1597c7e81b3adb713f38ecdcc544a5ed1bf47f2cd487582 Jan 27 16:06:38 crc kubenswrapper[4767]: I0127 16:06:38.299660 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lmbvl" event={"ID":"ca30eebc-8930-44d6-8a4b-deea9c5dbe56","Type":"ContainerStarted","Data":"9c611e27d19efe16f1597c7e81b3adb713f38ecdcc544a5ed1bf47f2cd487582"} Jan 27 16:06:38 crc kubenswrapper[4767]: I0127 16:06:38.302258 4767 generic.go:334] "Generic (PLEG): container finished" podID="0f3acb03-e177-4372-a36e-250bffeaeb15" containerID="9d817550519da6d4c9336e47f1f28f7e1b86686c95c4fcbaecd4e5b3332ba76e" exitCode=0 Jan 27 16:06:38 crc kubenswrapper[4767]: I0127 16:06:38.302329 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vnfhl" Jan 27 16:06:38 crc kubenswrapper[4767]: I0127 16:06:38.302389 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vnfhl" event={"ID":"0f3acb03-e177-4372-a36e-250bffeaeb15","Type":"ContainerDied","Data":"9d817550519da6d4c9336e47f1f28f7e1b86686c95c4fcbaecd4e5b3332ba76e"} Jan 27 16:06:38 crc kubenswrapper[4767]: I0127 16:06:38.302469 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vnfhl" event={"ID":"0f3acb03-e177-4372-a36e-250bffeaeb15","Type":"ContainerDied","Data":"0a75937a361ca80e5f3d0f86eb3f3f00e4ed4ab0a8b43282225bf3426f3443c5"} Jan 27 16:06:38 crc kubenswrapper[4767]: I0127 16:06:38.302493 4767 scope.go:117] "RemoveContainer" containerID="9d817550519da6d4c9336e47f1f28f7e1b86686c95c4fcbaecd4e5b3332ba76e" Jan 27 16:06:38 crc kubenswrapper[4767]: I0127 16:06:38.344893 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vnfhl"] Jan 27 16:06:38 crc kubenswrapper[4767]: I0127 16:06:38.347276 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-vnfhl"] Jan 27 16:06:38 crc kubenswrapper[4767]: I0127 16:06:38.350624 4767 scope.go:117] "RemoveContainer" containerID="9d817550519da6d4c9336e47f1f28f7e1b86686c95c4fcbaecd4e5b3332ba76e" Jan 27 16:06:38 crc kubenswrapper[4767]: E0127 16:06:38.351150 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d817550519da6d4c9336e47f1f28f7e1b86686c95c4fcbaecd4e5b3332ba76e\": container with ID starting with 9d817550519da6d4c9336e47f1f28f7e1b86686c95c4fcbaecd4e5b3332ba76e not found: ID does not exist" containerID="9d817550519da6d4c9336e47f1f28f7e1b86686c95c4fcbaecd4e5b3332ba76e" Jan 27 16:06:38 crc kubenswrapper[4767]: I0127 16:06:38.351206 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d817550519da6d4c9336e47f1f28f7e1b86686c95c4fcbaecd4e5b3332ba76e"} err="failed to get container status \"9d817550519da6d4c9336e47f1f28f7e1b86686c95c4fcbaecd4e5b3332ba76e\": rpc error: code = NotFound desc = could not find container \"9d817550519da6d4c9336e47f1f28f7e1b86686c95c4fcbaecd4e5b3332ba76e\": container with ID starting with 9d817550519da6d4c9336e47f1f28f7e1b86686c95c4fcbaecd4e5b3332ba76e not found: ID does not exist" Jan 27 16:06:38 crc kubenswrapper[4767]: I0127 16:06:38.815237 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xvk2m"] Jan 27 16:06:39 crc kubenswrapper[4767]: I0127 16:06:39.308909 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lmbvl" event={"ID":"ca30eebc-8930-44d6-8a4b-deea9c5dbe56","Type":"ContainerStarted","Data":"71d01c2975acef27dd5210ae4caa97748b267b138a4dd9192a667a4b8aea0144"} Jan 27 16:06:39 crc kubenswrapper[4767]: I0127 16:06:39.310029 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xvk2m" podUID="821cadc9-d1b1-4038-877f-f26c8974e7ca" containerName="registry-server" containerID="cri-o://ccd4fcb3e27d326622bd274129f891b397afcfc160dcb7b54203be9f17e97355" gracePeriod=2 Jan 27 16:06:39 crc kubenswrapper[4767]: I0127 16:06:39.329507 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-operator-index-lmbvl" podStartSLOduration=2.137366122 podStartE2EDuration="2.329488323s" podCreationTimestamp="2026-01-27 16:06:37 +0000 UTC" firstStartedPulling="2026-01-27 16:06:38.21168725 +0000 UTC m=+1020.600704773" lastFinishedPulling="2026-01-27 16:06:38.403809451 +0000 UTC m=+1020.792826974" observedRunningTime="2026-01-27 16:06:39.325686703 +0000 UTC m=+1021.714704246" watchObservedRunningTime="2026-01-27 16:06:39.329488323 +0000 UTC m=+1021.718505846" Jan 27 16:06:39 crc kubenswrapper[4767]: I0127 16:06:39.787391 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xvk2m" Jan 27 16:06:39 crc kubenswrapper[4767]: I0127 16:06:39.977253 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/821cadc9-d1b1-4038-877f-f26c8974e7ca-catalog-content\") pod \"821cadc9-d1b1-4038-877f-f26c8974e7ca\" (UID: \"821cadc9-d1b1-4038-877f-f26c8974e7ca\") " Jan 27 16:06:39 crc kubenswrapper[4767]: I0127 16:06:39.977379 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52rdt\" (UniqueName: \"kubernetes.io/projected/821cadc9-d1b1-4038-877f-f26c8974e7ca-kube-api-access-52rdt\") pod \"821cadc9-d1b1-4038-877f-f26c8974e7ca\" (UID: \"821cadc9-d1b1-4038-877f-f26c8974e7ca\") " Jan 27 16:06:39 crc kubenswrapper[4767]: I0127 16:06:39.977408 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/821cadc9-d1b1-4038-877f-f26c8974e7ca-utilities\") pod \"821cadc9-d1b1-4038-877f-f26c8974e7ca\" (UID: \"821cadc9-d1b1-4038-877f-f26c8974e7ca\") " Jan 27 16:06:39 crc kubenswrapper[4767]: I0127 16:06:39.978514 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/821cadc9-d1b1-4038-877f-f26c8974e7ca-utilities" (OuterVolumeSpecName: "utilities") pod "821cadc9-d1b1-4038-877f-f26c8974e7ca" (UID: "821cadc9-d1b1-4038-877f-f26c8974e7ca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:06:39 crc kubenswrapper[4767]: I0127 16:06:39.984689 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/821cadc9-d1b1-4038-877f-f26c8974e7ca-kube-api-access-52rdt" (OuterVolumeSpecName: "kube-api-access-52rdt") pod "821cadc9-d1b1-4038-877f-f26c8974e7ca" (UID: "821cadc9-d1b1-4038-877f-f26c8974e7ca"). InnerVolumeSpecName "kube-api-access-52rdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:06:40 crc kubenswrapper[4767]: I0127 16:06:40.024060 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/821cadc9-d1b1-4038-877f-f26c8974e7ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "821cadc9-d1b1-4038-877f-f26c8974e7ca" (UID: "821cadc9-d1b1-4038-877f-f26c8974e7ca"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:06:40 crc kubenswrapper[4767]: I0127 16:06:40.078672 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/821cadc9-d1b1-4038-877f-f26c8974e7ca-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:40 crc kubenswrapper[4767]: I0127 16:06:40.078974 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52rdt\" (UniqueName: \"kubernetes.io/projected/821cadc9-d1b1-4038-877f-f26c8974e7ca-kube-api-access-52rdt\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:40 crc kubenswrapper[4767]: I0127 16:06:40.079067 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/821cadc9-d1b1-4038-877f-f26c8974e7ca-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:40 crc kubenswrapper[4767]: I0127 16:06:40.323020 4767 generic.go:334] "Generic (PLEG): container finished" podID="821cadc9-d1b1-4038-877f-f26c8974e7ca" containerID="ccd4fcb3e27d326622bd274129f891b397afcfc160dcb7b54203be9f17e97355" exitCode=0 Jan 27 16:06:40 crc kubenswrapper[4767]: I0127 16:06:40.323110 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xvk2m" Jan 27 16:06:40 crc kubenswrapper[4767]: I0127 16:06:40.323151 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvk2m" event={"ID":"821cadc9-d1b1-4038-877f-f26c8974e7ca","Type":"ContainerDied","Data":"ccd4fcb3e27d326622bd274129f891b397afcfc160dcb7b54203be9f17e97355"} Jan 27 16:06:40 crc kubenswrapper[4767]: I0127 16:06:40.324331 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvk2m" event={"ID":"821cadc9-d1b1-4038-877f-f26c8974e7ca","Type":"ContainerDied","Data":"554dcb502acf1a2a0cce2894275a7d53f3cb508fd40e6b43a39ac44973f2c8e7"} Jan 27 16:06:40 crc kubenswrapper[4767]: I0127 16:06:40.324369 4767 scope.go:117] "RemoveContainer" containerID="ccd4fcb3e27d326622bd274129f891b397afcfc160dcb7b54203be9f17e97355" Jan 27 16:06:40 crc kubenswrapper[4767]: I0127 16:06:40.333278 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f3acb03-e177-4372-a36e-250bffeaeb15" path="/var/lib/kubelet/pods/0f3acb03-e177-4372-a36e-250bffeaeb15/volumes" Jan 27 16:06:40 crc kubenswrapper[4767]: I0127 16:06:40.341777 4767 scope.go:117] "RemoveContainer" containerID="8003f10fe9dca5cbb65bfa58755b32177b0550a8cdfed1f442149e981253e7f7" Jan 27 16:06:40 crc kubenswrapper[4767]: I0127 16:06:40.365462 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xvk2m"] Jan 27 16:06:40 crc kubenswrapper[4767]: I0127 16:06:40.369838 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xvk2m"] Jan 27 16:06:40 crc kubenswrapper[4767]: I0127 16:06:40.370987 4767 scope.go:117] "RemoveContainer" containerID="1b690c4a2570c722505e17fe22d7d032b470cdf6922b9228cca64e8b5f96e590" Jan 27 16:06:40 crc kubenswrapper[4767]: I0127 16:06:40.385229 4767 scope.go:117] "RemoveContainer" containerID="ccd4fcb3e27d326622bd274129f891b397afcfc160dcb7b54203be9f17e97355" Jan 27 16:06:40 crc kubenswrapper[4767]: E0127 16:06:40.385684 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccd4fcb3e27d326622bd274129f891b397afcfc160dcb7b54203be9f17e97355\": container with ID 
starting with ccd4fcb3e27d326622bd274129f891b397afcfc160dcb7b54203be9f17e97355 not found: ID does not exist" containerID="ccd4fcb3e27d326622bd274129f891b397afcfc160dcb7b54203be9f17e97355" Jan 27 16:06:40 crc kubenswrapper[4767]: I0127 16:06:40.385722 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccd4fcb3e27d326622bd274129f891b397afcfc160dcb7b54203be9f17e97355"} err="failed to get container status \"ccd4fcb3e27d326622bd274129f891b397afcfc160dcb7b54203be9f17e97355\": rpc error: code = NotFound desc = could not find container \"ccd4fcb3e27d326622bd274129f891b397afcfc160dcb7b54203be9f17e97355\": container with ID starting with ccd4fcb3e27d326622bd274129f891b397afcfc160dcb7b54203be9f17e97355 not found: ID does not exist" Jan 27 16:06:40 crc kubenswrapper[4767]: I0127 16:06:40.385751 4767 scope.go:117] "RemoveContainer" containerID="8003f10fe9dca5cbb65bfa58755b32177b0550a8cdfed1f442149e981253e7f7" Jan 27 16:06:40 crc kubenswrapper[4767]: E0127 16:06:40.386241 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8003f10fe9dca5cbb65bfa58755b32177b0550a8cdfed1f442149e981253e7f7\": container with ID starting with 8003f10fe9dca5cbb65bfa58755b32177b0550a8cdfed1f442149e981253e7f7 not found: ID does not exist" containerID="8003f10fe9dca5cbb65bfa58755b32177b0550a8cdfed1f442149e981253e7f7" Jan 27 16:06:40 crc kubenswrapper[4767]: I0127 16:06:40.386265 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8003f10fe9dca5cbb65bfa58755b32177b0550a8cdfed1f442149e981253e7f7"} err="failed to get container status \"8003f10fe9dca5cbb65bfa58755b32177b0550a8cdfed1f442149e981253e7f7\": rpc error: code = NotFound desc = could not find container \"8003f10fe9dca5cbb65bfa58755b32177b0550a8cdfed1f442149e981253e7f7\": container with ID starting with 8003f10fe9dca5cbb65bfa58755b32177b0550a8cdfed1f442149e981253e7f7 not found: ID does not exist" Jan 27 16:06:40 crc kubenswrapper[4767]: I0127 16:06:40.386283 4767 scope.go:117] "RemoveContainer" containerID="1b690c4a2570c722505e17fe22d7d032b470cdf6922b9228cca64e8b5f96e590" Jan 27 16:06:40 crc kubenswrapper[4767]: E0127 16:06:40.386626 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b690c4a2570c722505e17fe22d7d032b470cdf6922b9228cca64e8b5f96e590\": container with ID starting with 1b690c4a2570c722505e17fe22d7d032b470cdf6922b9228cca64e8b5f96e590 not found: ID does not exist" containerID="1b690c4a2570c722505e17fe22d7d032b470cdf6922b9228cca64e8b5f96e590" Jan 27 16:06:40 crc kubenswrapper[4767]: I0127 16:06:40.386677 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b690c4a2570c722505e17fe22d7d032b470cdf6922b9228cca64e8b5f96e590"} err="failed to get container status \"1b690c4a2570c722505e17fe22d7d032b470cdf6922b9228cca64e8b5f96e590\": rpc error: code = NotFound desc = could not find container \"1b690c4a2570c722505e17fe22d7d032b470cdf6922b9228cca64e8b5f96e590\": container with ID starting with 1b690c4a2570c722505e17fe22d7d032b470cdf6922b9228cca64e8b5f96e590 not found: ID does not exist" Jan 27 16:06:41 crc kubenswrapper[4767]: E0127 16:06:41.158440 4767 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f3acb03_e177_4372_a36e_250bffeaeb15.slice\": RecentStats: 
unable to find data in memory cache]" Jan 27 16:06:42 crc kubenswrapper[4767]: I0127 16:06:42.333793 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="821cadc9-d1b1-4038-877f-f26c8974e7ca" path="/var/lib/kubelet/pods/821cadc9-d1b1-4038-877f-f26c8974e7ca/volumes" Jan 27 16:06:44 crc kubenswrapper[4767]: I0127 16:06:44.207697 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-k4csc" Jan 27 16:06:47 crc kubenswrapper[4767]: I0127 16:06:47.767650 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-lmbvl" Jan 27 16:06:47 crc kubenswrapper[4767]: I0127 16:06:47.767966 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-lmbvl" Jan 27 16:06:47 crc kubenswrapper[4767]: I0127 16:06:47.797743 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-lmbvl" Jan 27 16:06:48 crc kubenswrapper[4767]: I0127 16:06:48.419521 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-lmbvl" Jan 27 16:06:51 crc kubenswrapper[4767]: E0127 16:06:51.308194 4767 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f3acb03_e177_4372_a36e_250bffeaeb15.slice\": RecentStats: unable to find data in memory cache]" Jan 27 16:06:54 crc kubenswrapper[4767]: I0127 16:06:54.858384 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:06:54 crc kubenswrapper[4767]: I0127 16:06:54.858872 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:07:01 crc kubenswrapper[4767]: E0127 16:07:01.443655 4767 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f3acb03_e177_4372_a36e_250bffeaeb15.slice\": RecentStats: unable to find data in memory cache]" Jan 27 16:07:01 crc kubenswrapper[4767]: I0127 16:07:01.667622 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm"] Jan 27 16:07:01 crc kubenswrapper[4767]: E0127 16:07:01.668298 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="821cadc9-d1b1-4038-877f-f26c8974e7ca" containerName="registry-server" Jan 27 16:07:01 crc kubenswrapper[4767]: I0127 16:07:01.668315 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="821cadc9-d1b1-4038-877f-f26c8974e7ca" containerName="registry-server" Jan 27 16:07:01 crc kubenswrapper[4767]: E0127 16:07:01.668344 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="821cadc9-d1b1-4038-877f-f26c8974e7ca" containerName="extract-content" Jan 27 16:07:01 crc kubenswrapper[4767]: I0127 16:07:01.668354 4767 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="821cadc9-d1b1-4038-877f-f26c8974e7ca" containerName="extract-content" Jan 27 16:07:01 crc kubenswrapper[4767]: E0127 16:07:01.668365 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f3acb03-e177-4372-a36e-250bffeaeb15" containerName="registry-server" Jan 27 16:07:01 crc kubenswrapper[4767]: I0127 16:07:01.668374 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f3acb03-e177-4372-a36e-250bffeaeb15" containerName="registry-server" Jan 27 16:07:01 crc kubenswrapper[4767]: E0127 16:07:01.668388 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="821cadc9-d1b1-4038-877f-f26c8974e7ca" containerName="extract-utilities" Jan 27 16:07:01 crc kubenswrapper[4767]: I0127 16:07:01.668397 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="821cadc9-d1b1-4038-877f-f26c8974e7ca" containerName="extract-utilities" Jan 27 16:07:01 crc kubenswrapper[4767]: I0127 16:07:01.668550 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f3acb03-e177-4372-a36e-250bffeaeb15" containerName="registry-server" Jan 27 16:07:01 crc kubenswrapper[4767]: I0127 16:07:01.668567 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="821cadc9-d1b1-4038-877f-f26c8974e7ca" containerName="registry-server" Jan 27 16:07:01 crc kubenswrapper[4767]: I0127 16:07:01.669706 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm" Jan 27 16:07:01 crc kubenswrapper[4767]: I0127 16:07:01.671701 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-9qcbc" Jan 27 16:07:01 crc kubenswrapper[4767]: I0127 16:07:01.676894 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm"] Jan 27 16:07:01 crc kubenswrapper[4767]: I0127 16:07:01.770658 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzr75\" (UniqueName: \"kubernetes.io/projected/463eb2d8-4b46-4847-af23-df7d867fb2f6-kube-api-access-gzr75\") pod \"5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm\" (UID: \"463eb2d8-4b46-4847-af23-df7d867fb2f6\") " pod="openstack-operators/5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm" Jan 27 16:07:01 crc kubenswrapper[4767]: I0127 16:07:01.771024 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/463eb2d8-4b46-4847-af23-df7d867fb2f6-bundle\") pod \"5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm\" (UID: \"463eb2d8-4b46-4847-af23-df7d867fb2f6\") " pod="openstack-operators/5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm" Jan 27 16:07:01 crc kubenswrapper[4767]: I0127 16:07:01.771168 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/463eb2d8-4b46-4847-af23-df7d867fb2f6-util\") pod \"5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm\" (UID: \"463eb2d8-4b46-4847-af23-df7d867fb2f6\") " pod="openstack-operators/5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm" Jan 27 16:07:01 crc kubenswrapper[4767]: I0127 16:07:01.872932 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzr75\" (UniqueName: 
\"kubernetes.io/projected/463eb2d8-4b46-4847-af23-df7d867fb2f6-kube-api-access-gzr75\") pod \"5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm\" (UID: \"463eb2d8-4b46-4847-af23-df7d867fb2f6\") " pod="openstack-operators/5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm" Jan 27 16:07:01 crc kubenswrapper[4767]: I0127 16:07:01.873033 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/463eb2d8-4b46-4847-af23-df7d867fb2f6-bundle\") pod \"5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm\" (UID: \"463eb2d8-4b46-4847-af23-df7d867fb2f6\") " pod="openstack-operators/5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm" Jan 27 16:07:01 crc kubenswrapper[4767]: I0127 16:07:01.873092 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/463eb2d8-4b46-4847-af23-df7d867fb2f6-util\") pod \"5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm\" (UID: \"463eb2d8-4b46-4847-af23-df7d867fb2f6\") " pod="openstack-operators/5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm" Jan 27 16:07:01 crc kubenswrapper[4767]: I0127 16:07:01.873603 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/463eb2d8-4b46-4847-af23-df7d867fb2f6-util\") pod \"5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm\" (UID: \"463eb2d8-4b46-4847-af23-df7d867fb2f6\") " pod="openstack-operators/5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm" Jan 27 16:07:01 crc kubenswrapper[4767]: I0127 16:07:01.873845 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/463eb2d8-4b46-4847-af23-df7d867fb2f6-bundle\") pod \"5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm\" (UID: \"463eb2d8-4b46-4847-af23-df7d867fb2f6\") " pod="openstack-operators/5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm" Jan 27 16:07:01 crc kubenswrapper[4767]: I0127 16:07:01.895798 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzr75\" (UniqueName: \"kubernetes.io/projected/463eb2d8-4b46-4847-af23-df7d867fb2f6-kube-api-access-gzr75\") pod \"5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm\" (UID: \"463eb2d8-4b46-4847-af23-df7d867fb2f6\") " pod="openstack-operators/5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm" Jan 27 16:07:01 crc kubenswrapper[4767]: I0127 16:07:01.992469 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm" Jan 27 16:07:02 crc kubenswrapper[4767]: I0127 16:07:02.442536 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm"] Jan 27 16:07:02 crc kubenswrapper[4767]: I0127 16:07:02.484799 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm" event={"ID":"463eb2d8-4b46-4847-af23-df7d867fb2f6","Type":"ContainerStarted","Data":"14d43c05c0a132ae5ca954839e50289e6be314d80ce4c4b0854bb905be9a47b5"} Jan 27 16:07:03 crc kubenswrapper[4767]: I0127 16:07:03.493782 4767 generic.go:334] "Generic (PLEG): container finished" podID="463eb2d8-4b46-4847-af23-df7d867fb2f6" containerID="fc25be4ee6d289f4b90683bb44254d82d011eedaf3fb36979e4b4c6322ba3bf2" exitCode=0 Jan 27 16:07:03 crc kubenswrapper[4767]: I0127 16:07:03.494065 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm" event={"ID":"463eb2d8-4b46-4847-af23-df7d867fb2f6","Type":"ContainerDied","Data":"fc25be4ee6d289f4b90683bb44254d82d011eedaf3fb36979e4b4c6322ba3bf2"} Jan 27 16:07:06 crc kubenswrapper[4767]: I0127 16:07:06.523882 4767 generic.go:334] "Generic (PLEG): container finished" podID="463eb2d8-4b46-4847-af23-df7d867fb2f6" containerID="d7d323c3d997877a150f7363ed3bb8bfcb5267d0e8c3a0d7df4bb6f960ca2f4b" exitCode=0 Jan 27 16:07:06 crc kubenswrapper[4767]: I0127 16:07:06.524005 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm" event={"ID":"463eb2d8-4b46-4847-af23-df7d867fb2f6","Type":"ContainerDied","Data":"d7d323c3d997877a150f7363ed3bb8bfcb5267d0e8c3a0d7df4bb6f960ca2f4b"} Jan 27 16:07:08 crc kubenswrapper[4767]: I0127 16:07:08.544553 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm" event={"ID":"463eb2d8-4b46-4847-af23-df7d867fb2f6","Type":"ContainerStarted","Data":"d3047e7505a4ca966dc827bdd119175728f34f14bc26ac0ea0ca8f10a396366f"} Jan 27 16:07:09 crc kubenswrapper[4767]: I0127 16:07:09.556612 4767 generic.go:334] "Generic (PLEG): container finished" podID="463eb2d8-4b46-4847-af23-df7d867fb2f6" containerID="d3047e7505a4ca966dc827bdd119175728f34f14bc26ac0ea0ca8f10a396366f" exitCode=0 Jan 27 16:07:09 crc kubenswrapper[4767]: I0127 16:07:09.556691 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm" event={"ID":"463eb2d8-4b46-4847-af23-df7d867fb2f6","Type":"ContainerDied","Data":"d3047e7505a4ca966dc827bdd119175728f34f14bc26ac0ea0ca8f10a396366f"} Jan 27 16:07:10 crc kubenswrapper[4767]: I0127 16:07:10.822536 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm" Jan 27 16:07:10 crc kubenswrapper[4767]: I0127 16:07:10.906606 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzr75\" (UniqueName: \"kubernetes.io/projected/463eb2d8-4b46-4847-af23-df7d867fb2f6-kube-api-access-gzr75\") pod \"463eb2d8-4b46-4847-af23-df7d867fb2f6\" (UID: \"463eb2d8-4b46-4847-af23-df7d867fb2f6\") " Jan 27 16:07:10 crc kubenswrapper[4767]: I0127 16:07:10.906776 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/463eb2d8-4b46-4847-af23-df7d867fb2f6-bundle\") pod \"463eb2d8-4b46-4847-af23-df7d867fb2f6\" (UID: \"463eb2d8-4b46-4847-af23-df7d867fb2f6\") " Jan 27 16:07:10 crc kubenswrapper[4767]: I0127 16:07:10.906853 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/463eb2d8-4b46-4847-af23-df7d867fb2f6-util\") pod \"463eb2d8-4b46-4847-af23-df7d867fb2f6\" (UID: \"463eb2d8-4b46-4847-af23-df7d867fb2f6\") " Jan 27 16:07:10 crc kubenswrapper[4767]: I0127 16:07:10.907852 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/463eb2d8-4b46-4847-af23-df7d867fb2f6-bundle" (OuterVolumeSpecName: "bundle") pod "463eb2d8-4b46-4847-af23-df7d867fb2f6" (UID: "463eb2d8-4b46-4847-af23-df7d867fb2f6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:07:10 crc kubenswrapper[4767]: I0127 16:07:10.912881 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/463eb2d8-4b46-4847-af23-df7d867fb2f6-kube-api-access-gzr75" (OuterVolumeSpecName: "kube-api-access-gzr75") pod "463eb2d8-4b46-4847-af23-df7d867fb2f6" (UID: "463eb2d8-4b46-4847-af23-df7d867fb2f6"). InnerVolumeSpecName "kube-api-access-gzr75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:07:10 crc kubenswrapper[4767]: I0127 16:07:10.921623 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/463eb2d8-4b46-4847-af23-df7d867fb2f6-util" (OuterVolumeSpecName: "util") pod "463eb2d8-4b46-4847-af23-df7d867fb2f6" (UID: "463eb2d8-4b46-4847-af23-df7d867fb2f6"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:07:11 crc kubenswrapper[4767]: I0127 16:07:11.009346 4767 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/463eb2d8-4b46-4847-af23-df7d867fb2f6-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:11 crc kubenswrapper[4767]: I0127 16:07:11.009411 4767 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/463eb2d8-4b46-4847-af23-df7d867fb2f6-util\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:11 crc kubenswrapper[4767]: I0127 16:07:11.009429 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzr75\" (UniqueName: \"kubernetes.io/projected/463eb2d8-4b46-4847-af23-df7d867fb2f6-kube-api-access-gzr75\") on node \"crc\" DevicePath \"\"" Jan 27 16:07:11 crc kubenswrapper[4767]: I0127 16:07:11.581445 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm" event={"ID":"463eb2d8-4b46-4847-af23-df7d867fb2f6","Type":"ContainerDied","Data":"14d43c05c0a132ae5ca954839e50289e6be314d80ce4c4b0854bb905be9a47b5"} Jan 27 16:07:11 crc kubenswrapper[4767]: I0127 16:07:11.581852 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14d43c05c0a132ae5ca954839e50289e6be314d80ce4c4b0854bb905be9a47b5" Jan 27 16:07:11 crc kubenswrapper[4767]: I0127 16:07:11.581524 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm" Jan 27 16:07:11 crc kubenswrapper[4767]: E0127 16:07:11.634140 4767 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f3acb03_e177_4372_a36e_250bffeaeb15.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod463eb2d8_4b46_4847_af23_df7d867fb2f6.slice\": RecentStats: unable to find data in memory cache]" Jan 27 16:07:18 crc kubenswrapper[4767]: I0127 16:07:18.879834 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-65bf5cdd75-dqrvx"] Jan 27 16:07:18 crc kubenswrapper[4767]: E0127 16:07:18.880559 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="463eb2d8-4b46-4847-af23-df7d867fb2f6" containerName="util" Jan 27 16:07:18 crc kubenswrapper[4767]: I0127 16:07:18.880570 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="463eb2d8-4b46-4847-af23-df7d867fb2f6" containerName="util" Jan 27 16:07:18 crc kubenswrapper[4767]: E0127 16:07:18.880581 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="463eb2d8-4b46-4847-af23-df7d867fb2f6" containerName="extract" Jan 27 16:07:18 crc kubenswrapper[4767]: I0127 16:07:18.880587 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="463eb2d8-4b46-4847-af23-df7d867fb2f6" containerName="extract" Jan 27 16:07:18 crc kubenswrapper[4767]: E0127 16:07:18.880594 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="463eb2d8-4b46-4847-af23-df7d867fb2f6" containerName="pull" Jan 27 16:07:18 crc kubenswrapper[4767]: I0127 16:07:18.880601 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="463eb2d8-4b46-4847-af23-df7d867fb2f6" containerName="pull" Jan 27 16:07:18 crc kubenswrapper[4767]: I0127 16:07:18.880712 4767 
Jan 27 16:07:18 crc kubenswrapper[4767]: I0127 16:07:18.881130 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-65bf5cdd75-dqrvx"
Jan 27 16:07:18 crc kubenswrapper[4767]: I0127 16:07:18.900708 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-8jjg6"
Jan 27 16:07:18 crc kubenswrapper[4767]: I0127 16:07:18.923974 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srbxv\" (UniqueName: \"kubernetes.io/projected/e436713f-4d09-4773-ac32-f3ea6741be35-kube-api-access-srbxv\") pod \"openstack-operator-controller-init-65bf5cdd75-dqrvx\" (UID: \"e436713f-4d09-4773-ac32-f3ea6741be35\") " pod="openstack-operators/openstack-operator-controller-init-65bf5cdd75-dqrvx"
Jan 27 16:07:18 crc kubenswrapper[4767]: I0127 16:07:18.940553 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-65bf5cdd75-dqrvx"]
Jan 27 16:07:19 crc kubenswrapper[4767]: I0127 16:07:19.025765 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srbxv\" (UniqueName: \"kubernetes.io/projected/e436713f-4d09-4773-ac32-f3ea6741be35-kube-api-access-srbxv\") pod \"openstack-operator-controller-init-65bf5cdd75-dqrvx\" (UID: \"e436713f-4d09-4773-ac32-f3ea6741be35\") " pod="openstack-operators/openstack-operator-controller-init-65bf5cdd75-dqrvx"
Jan 27 16:07:19 crc kubenswrapper[4767]: I0127 16:07:19.051181 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srbxv\" (UniqueName: \"kubernetes.io/projected/e436713f-4d09-4773-ac32-f3ea6741be35-kube-api-access-srbxv\") pod \"openstack-operator-controller-init-65bf5cdd75-dqrvx\" (UID: \"e436713f-4d09-4773-ac32-f3ea6741be35\") " pod="openstack-operators/openstack-operator-controller-init-65bf5cdd75-dqrvx"
Jan 27 16:07:19 crc kubenswrapper[4767]: I0127 16:07:19.198386 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-65bf5cdd75-dqrvx"
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-65bf5cdd75-dqrvx" Jan 27 16:07:19 crc kubenswrapper[4767]: I0127 16:07:19.645510 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-65bf5cdd75-dqrvx"] Jan 27 16:07:20 crc kubenswrapper[4767]: I0127 16:07:20.654002 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-65bf5cdd75-dqrvx" event={"ID":"e436713f-4d09-4773-ac32-f3ea6741be35","Type":"ContainerStarted","Data":"2b94d0f3942524c25c10a05f930406ea190c2d99c515763dfee5c522cf34d35f"} Jan 27 16:07:21 crc kubenswrapper[4767]: E0127 16:07:21.792003 4767 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f3acb03_e177_4372_a36e_250bffeaeb15.slice\": RecentStats: unable to find data in memory cache]" Jan 27 16:07:23 crc kubenswrapper[4767]: I0127 16:07:23.684067 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-65bf5cdd75-dqrvx" event={"ID":"e436713f-4d09-4773-ac32-f3ea6741be35","Type":"ContainerStarted","Data":"e8242aebe956335cd2e6b99c09ce12107acdb7c640c2c4daf4f697f1befc9b2a"} Jan 27 16:07:23 crc kubenswrapper[4767]: I0127 16:07:23.684681 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-65bf5cdd75-dqrvx" Jan 27 16:07:23 crc kubenswrapper[4767]: I0127 16:07:23.722032 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-65bf5cdd75-dqrvx" podStartSLOduration=2.114709076 podStartE2EDuration="5.722009264s" podCreationTimestamp="2026-01-27 16:07:18 +0000 UTC" firstStartedPulling="2026-01-27 16:07:19.64594116 +0000 UTC m=+1062.034958683" lastFinishedPulling="2026-01-27 16:07:23.253241348 +0000 UTC m=+1065.642258871" observedRunningTime="2026-01-27 16:07:23.719391879 +0000 UTC m=+1066.108409402" watchObservedRunningTime="2026-01-27 16:07:23.722009264 +0000 UTC m=+1066.111026797" Jan 27 16:07:24 crc kubenswrapper[4767]: I0127 16:07:24.858132 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:07:24 crc kubenswrapper[4767]: I0127 16:07:24.858226 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:07:24 crc kubenswrapper[4767]: I0127 16:07:24.858268 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" Jan 27 16:07:24 crc kubenswrapper[4767]: I0127 16:07:24.858842 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f3d25f07cf5921e6e421aefa0d813e2909e28e1abdde0dc623cba28c2a963a96"} pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Jan 27 16:07:24 crc kubenswrapper[4767]: I0127 16:07:24.858915 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" containerID="cri-o://f3d25f07cf5921e6e421aefa0d813e2909e28e1abdde0dc623cba28c2a963a96" gracePeriod=600 Jan 27 16:07:25 crc kubenswrapper[4767]: I0127 16:07:25.701055 4767 generic.go:334] "Generic (PLEG): container finished" podID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerID="f3d25f07cf5921e6e421aefa0d813e2909e28e1abdde0dc623cba28c2a963a96" exitCode=0 Jan 27 16:07:25 crc kubenswrapper[4767]: I0127 16:07:25.701144 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerDied","Data":"f3d25f07cf5921e6e421aefa0d813e2909e28e1abdde0dc623cba28c2a963a96"} Jan 27 16:07:25 crc kubenswrapper[4767]: I0127 16:07:25.701515 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerStarted","Data":"5eae9696c039bc84d18bc2b4b0801483abe347db02d34f8f9f3cf2ec17b09fcc"} Jan 27 16:07:25 crc kubenswrapper[4767]: I0127 16:07:25.701543 4767 scope.go:117] "RemoveContainer" containerID="fad0c9cec55858322e531728aa0e6d429308608bc45d2d2ee15b473a2ae6c66a" Jan 27 16:07:29 crc kubenswrapper[4767]: I0127 16:07:29.200752 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-65bf5cdd75-dqrvx" Jan 27 16:07:31 crc kubenswrapper[4767]: E0127 16:07:31.957466 4767 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f3acb03_e177_4372_a36e_250bffeaeb15.slice\": RecentStats: unable to find data in memory cache]" Jan 27 16:07:48 crc kubenswrapper[4767]: I0127 16:07:48.975942 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-2p8b9"] Jan 27 16:07:48 crc kubenswrapper[4767]: I0127 16:07:48.977282 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-2p8b9" Jan 27 16:07:48 crc kubenswrapper[4767]: I0127 16:07:48.981906 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-p6272" Jan 27 16:07:48 crc kubenswrapper[4767]: I0127 16:07:48.988110 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-nn275"] Jan 27 16:07:48 crc kubenswrapper[4767]: I0127 16:07:48.989057 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-nn275" Jan 27 16:07:48 crc kubenswrapper[4767]: I0127 16:07:48.990479 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-pzd4t" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.003191 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-nn275"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.011531 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-2p8b9"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.025996 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-zhm6x"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.026997 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-zhm6x" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.033871 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-w27rw" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.044919 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-vrwcs"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.045897 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-vrwcs" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.048473 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p958n\" (UniqueName: \"kubernetes.io/projected/4b4d49ca-1e76-4d5a-8205-cdb44f6afa01-kube-api-access-p958n\") pod \"barbican-operator-controller-manager-65ff799cfd-2p8b9\" (UID: \"4b4d49ca-1e76-4d5a-8205-cdb44f6afa01\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-2p8b9" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.048515 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llsdg\" (UniqueName: \"kubernetes.io/projected/d1f0c156-6150-435c-afc4-224f4f72a0e2-kube-api-access-llsdg\") pod \"cinder-operator-controller-manager-655bf9cfbb-nn275\" (UID: \"d1f0c156-6150-435c-afc4-224f4f72a0e2\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-nn275" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.049908 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-x5kb5" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.092146 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-zhm6x"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.105266 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-575ffb885b-lwfvm"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.106253 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-lwfvm" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.110635 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-vrwcs"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.114801 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-zbvsx" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.117952 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7vc2f"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.119484 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7vc2f" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.122605 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-xzs7b" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.141429 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-575ffb885b-lwfvm"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.148278 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7vc2f"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.150853 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvzw5\" (UniqueName: \"kubernetes.io/projected/a8181a54-8433-4343-84b5-f32f6f80f0d6-kube-api-access-qvzw5\") pod \"heat-operator-controller-manager-575ffb885b-lwfvm\" (UID: \"a8181a54-8433-4343-84b5-f32f6f80f0d6\") " pod="openstack-operators/heat-operator-controller-manager-575ffb885b-lwfvm" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.150905 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58k4s\" (UniqueName: \"kubernetes.io/projected/916825ff-c27d-4760-92bc-4adb7dc12ca2-kube-api-access-58k4s\") pod \"designate-operator-controller-manager-77554cdc5c-zhm6x\" (UID: \"916825ff-c27d-4760-92bc-4adb7dc12ca2\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-zhm6x" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.150947 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-956kk\" (UniqueName: \"kubernetes.io/projected/6a1435fd-9fab-4f48-a588-d8ae2aa1e120-kube-api-access-956kk\") pod \"glance-operator-controller-manager-67dd55ff59-vrwcs\" (UID: \"6a1435fd-9fab-4f48-a588-d8ae2aa1e120\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-vrwcs" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.151020 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p958n\" (UniqueName: \"kubernetes.io/projected/4b4d49ca-1e76-4d5a-8205-cdb44f6afa01-kube-api-access-p958n\") pod \"barbican-operator-controller-manager-65ff799cfd-2p8b9\" (UID: \"4b4d49ca-1e76-4d5a-8205-cdb44f6afa01\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-2p8b9" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.151048 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-llsdg\" (UniqueName: \"kubernetes.io/projected/d1f0c156-6150-435c-afc4-224f4f72a0e2-kube-api-access-llsdg\") pod \"cinder-operator-controller-manager-655bf9cfbb-nn275\" (UID: \"d1f0c156-6150-435c-afc4-224f4f72a0e2\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-nn275" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.162870 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-8ksrz"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.163909 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8ksrz" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.171136 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.171448 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-qpc4v" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.202493 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p958n\" (UniqueName: \"kubernetes.io/projected/4b4d49ca-1e76-4d5a-8205-cdb44f6afa01-kube-api-access-p958n\") pod \"barbican-operator-controller-manager-65ff799cfd-2p8b9\" (UID: \"4b4d49ca-1e76-4d5a-8205-cdb44f6afa01\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-2p8b9" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.209592 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-8ksrz"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.217617 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llsdg\" (UniqueName: \"kubernetes.io/projected/d1f0c156-6150-435c-afc4-224f4f72a0e2-kube-api-access-llsdg\") pod \"cinder-operator-controller-manager-655bf9cfbb-nn275\" (UID: \"d1f0c156-6150-435c-afc4-224f4f72a0e2\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-nn275" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.227387 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-l8nss"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.228427 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-l8nss" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.234719 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-qkdp6" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.251974 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-l8nss"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.252048 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e093bca8-5087-47cd-a9af-719248b96d6d-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-8ksrz\" (UID: \"e093bca8-5087-47cd-a9af-719248b96d6d\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8ksrz" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.252089 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvzw5\" (UniqueName: \"kubernetes.io/projected/a8181a54-8433-4343-84b5-f32f6f80f0d6-kube-api-access-qvzw5\") pod \"heat-operator-controller-manager-575ffb885b-lwfvm\" (UID: \"a8181a54-8433-4343-84b5-f32f6f80f0d6\") " pod="openstack-operators/heat-operator-controller-manager-575ffb885b-lwfvm" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.252122 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58k4s\" (UniqueName: \"kubernetes.io/projected/916825ff-c27d-4760-92bc-4adb7dc12ca2-kube-api-access-58k4s\") pod \"designate-operator-controller-manager-77554cdc5c-zhm6x\" (UID: \"916825ff-c27d-4760-92bc-4adb7dc12ca2\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-zhm6x" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.252150 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7jm8\" (UniqueName: \"kubernetes.io/projected/e093bca8-5087-47cd-a9af-719248b96d6d-kube-api-access-s7jm8\") pod \"infra-operator-controller-manager-7d75bc88d5-8ksrz\" (UID: \"e093bca8-5087-47cd-a9af-719248b96d6d\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8ksrz" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.252170 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlhs7\" (UniqueName: \"kubernetes.io/projected/aa44bde6-467e-42ef-b797-851ee0f87a12-kube-api-access-xlhs7\") pod \"horizon-operator-controller-manager-77d5c5b54f-7vc2f\" (UID: \"aa44bde6-467e-42ef-b797-851ee0f87a12\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7vc2f" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.252186 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-956kk\" (UniqueName: \"kubernetes.io/projected/6a1435fd-9fab-4f48-a588-d8ae2aa1e120-kube-api-access-956kk\") pod \"glance-operator-controller-manager-67dd55ff59-vrwcs\" (UID: \"6a1435fd-9fab-4f48-a588-d8ae2aa1e120\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-vrwcs" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.261576 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-j7sdn"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.262521 4767 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-j7sdn" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.267738 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-c68xg" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.282273 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-xf9r5"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.283435 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-xf9r5" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.286464 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-r44jk" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.297179 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-2p8b9" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.304152 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvzw5\" (UniqueName: \"kubernetes.io/projected/a8181a54-8433-4343-84b5-f32f6f80f0d6-kube-api-access-qvzw5\") pod \"heat-operator-controller-manager-575ffb885b-lwfvm\" (UID: \"a8181a54-8433-4343-84b5-f32f6f80f0d6\") " pod="openstack-operators/heat-operator-controller-manager-575ffb885b-lwfvm" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.304892 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-nn275" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.307767 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58k4s\" (UniqueName: \"kubernetes.io/projected/916825ff-c27d-4760-92bc-4adb7dc12ca2-kube-api-access-58k4s\") pod \"designate-operator-controller-manager-77554cdc5c-zhm6x\" (UID: \"916825ff-c27d-4760-92bc-4adb7dc12ca2\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-zhm6x" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.318526 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-g6bgj"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.319656 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-g6bgj" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.358696 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-zhm6x" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.359891 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-956kk\" (UniqueName: \"kubernetes.io/projected/6a1435fd-9fab-4f48-a588-d8ae2aa1e120-kube-api-access-956kk\") pod \"glance-operator-controller-manager-67dd55ff59-vrwcs\" (UID: \"6a1435fd-9fab-4f48-a588-d8ae2aa1e120\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-vrwcs" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.380187 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-jvgzx" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.386760 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7jm8\" (UniqueName: \"kubernetes.io/projected/e093bca8-5087-47cd-a9af-719248b96d6d-kube-api-access-s7jm8\") pod \"infra-operator-controller-manager-7d75bc88d5-8ksrz\" (UID: \"e093bca8-5087-47cd-a9af-719248b96d6d\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8ksrz" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.386815 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlhs7\" (UniqueName: \"kubernetes.io/projected/aa44bde6-467e-42ef-b797-851ee0f87a12-kube-api-access-xlhs7\") pod \"horizon-operator-controller-manager-77d5c5b54f-7vc2f\" (UID: \"aa44bde6-467e-42ef-b797-851ee0f87a12\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7vc2f" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.386859 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ztqh\" (UniqueName: \"kubernetes.io/projected/f7123cb1-dbea-42fd-abba-970911e37f5f-kube-api-access-6ztqh\") pod \"keystone-operator-controller-manager-55f684fd56-j7sdn\" (UID: \"f7123cb1-dbea-42fd-abba-970911e37f5f\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-j7sdn" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.386977 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps42q\" (UniqueName: \"kubernetes.io/projected/fe7ca101-b6f4-4733-a896-a9d203cc4bc0-kube-api-access-ps42q\") pod \"ironic-operator-controller-manager-768b776ffb-l8nss\" (UID: \"fe7ca101-b6f4-4733-a896-a9d203cc4bc0\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-l8nss" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.387080 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e093bca8-5087-47cd-a9af-719248b96d6d-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-8ksrz\" (UID: \"e093bca8-5087-47cd-a9af-719248b96d6d\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8ksrz" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.387132 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lw4r\" (UniqueName: \"kubernetes.io/projected/3ae8a5b5-c9f9-4130-af1e-721617d5c204-kube-api-access-5lw4r\") pod \"manila-operator-controller-manager-849fcfbb6b-xf9r5\" (UID: \"3ae8a5b5-c9f9-4130-af1e-721617d5c204\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-xf9r5" Jan 27 16:07:49 
Jan 27 16:07:49 crc kubenswrapper[4767]: E0127 16:07:49.387343 4767 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 27 16:07:49 crc kubenswrapper[4767]: E0127 16:07:49.387411 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e093bca8-5087-47cd-a9af-719248b96d6d-cert podName:e093bca8-5087-47cd-a9af-719248b96d6d nodeName:}" failed. No retries permitted until 2026-01-27 16:07:49.88739132 +0000 UTC m=+1092.276408843 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e093bca8-5087-47cd-a9af-719248b96d6d-cert") pod "infra-operator-controller-manager-7d75bc88d5-8ksrz" (UID: "e093bca8-5087-47cd-a9af-719248b96d6d") : secret "infra-operator-webhook-server-cert" not found
Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.387879 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-vrwcs"
Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.397450 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-j7sdn"]
Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.419343 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlhs7\" (UniqueName: \"kubernetes.io/projected/aa44bde6-467e-42ef-b797-851ee0f87a12-kube-api-access-xlhs7\") pod \"horizon-operator-controller-manager-77d5c5b54f-7vc2f\" (UID: \"aa44bde6-467e-42ef-b797-851ee0f87a12\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7vc2f"
Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.422776 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-xf9r5"]
Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.425673 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7jm8\" (UniqueName: \"kubernetes.io/projected/e093bca8-5087-47cd-a9af-719248b96d6d-kube-api-access-s7jm8\") pod \"infra-operator-controller-manager-7d75bc88d5-8ksrz\" (UID: \"e093bca8-5087-47cd-a9af-719248b96d6d\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8ksrz"
Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.433851 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-lwfvm"
Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.440805 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xfptk"]
Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.445846 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xfptk"
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xfptk" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.447804 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-g6bgj"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.448119 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7vc2f" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.449439 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-kwc79" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.458419 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xfptk"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.468864 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-94z5v"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.470078 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-94z5v" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.479246 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-dzhk4"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.482627 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-dzhk4" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.487117 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-94z5v"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.487966 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ztqh\" (UniqueName: \"kubernetes.io/projected/f7123cb1-dbea-42fd-abba-970911e37f5f-kube-api-access-6ztqh\") pod \"keystone-operator-controller-manager-55f684fd56-j7sdn\" (UID: \"f7123cb1-dbea-42fd-abba-970911e37f5f\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-j7sdn" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.488017 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps42q\" (UniqueName: \"kubernetes.io/projected/fe7ca101-b6f4-4733-a896-a9d203cc4bc0-kube-api-access-ps42q\") pod \"ironic-operator-controller-manager-768b776ffb-l8nss\" (UID: \"fe7ca101-b6f4-4733-a896-a9d203cc4bc0\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-l8nss" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.488073 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lw4r\" (UniqueName: \"kubernetes.io/projected/3ae8a5b5-c9f9-4130-af1e-721617d5c204-kube-api-access-5lw4r\") pod \"manila-operator-controller-manager-849fcfbb6b-xf9r5\" (UID: \"3ae8a5b5-c9f9-4130-af1e-721617d5c204\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-xf9r5" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.488096 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnvgj\" (UniqueName: 
\"kubernetes.io/projected/4c6965aa-5607-4647-a78f-eb708720424e-kube-api-access-jnvgj\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-g6bgj\" (UID: \"4c6965aa-5607-4647-a78f-eb708720424e\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-g6bgj" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.519449 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-rffrv" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.519791 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-b5r5c" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.523685 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lw4r\" (UniqueName: \"kubernetes.io/projected/3ae8a5b5-c9f9-4130-af1e-721617d5c204-kube-api-access-5lw4r\") pod \"manila-operator-controller-manager-849fcfbb6b-xf9r5\" (UID: \"3ae8a5b5-c9f9-4130-af1e-721617d5c204\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-xf9r5" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.527162 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ztqh\" (UniqueName: \"kubernetes.io/projected/f7123cb1-dbea-42fd-abba-970911e37f5f-kube-api-access-6ztqh\") pod \"keystone-operator-controller-manager-55f684fd56-j7sdn\" (UID: \"f7123cb1-dbea-42fd-abba-970911e37f5f\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-j7sdn" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.530859 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnvgj\" (UniqueName: \"kubernetes.io/projected/4c6965aa-5607-4647-a78f-eb708720424e-kube-api-access-jnvgj\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-g6bgj\" (UID: \"4c6965aa-5607-4647-a78f-eb708720424e\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-g6bgj" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.534496 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps42q\" (UniqueName: \"kubernetes.io/projected/fe7ca101-b6f4-4733-a896-a9d203cc4bc0-kube-api-access-ps42q\") pod \"ironic-operator-controller-manager-768b776ffb-l8nss\" (UID: \"fe7ca101-b6f4-4733-a896-a9d203cc4bc0\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-l8nss" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.538424 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-dzhk4"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.557586 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-l8nss" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.589413 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-j7sdn" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.590930 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzjd2\" (UniqueName: \"kubernetes.io/projected/61bd36e7-2117-44d2-86e5-62a7d776434e-kube-api-access-rzjd2\") pod \"nova-operator-controller-manager-ddcbfd695-94z5v\" (UID: \"61bd36e7-2117-44d2-86e5-62a7d776434e\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-94z5v" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.591143 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zstd\" (UniqueName: \"kubernetes.io/projected/a5437c7a-1810-4e6f-9db6-22cc39f0c744-kube-api-access-5zstd\") pod \"octavia-operator-controller-manager-7875d7675-dzhk4\" (UID: \"a5437c7a-1810-4e6f-9db6-22cc39f0c744\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-dzhk4" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.591319 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsh76\" (UniqueName: \"kubernetes.io/projected/9df4e8b9-adcf-4442-a0db-70f45bf9977d-kube-api-access-bsh76\") pod \"neutron-operator-controller-manager-7ffd8d76d4-xfptk\" (UID: \"9df4e8b9-adcf-4442-a0db-70f45bf9977d\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xfptk" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.626767 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85489vr8"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.628850 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85489vr8" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.635258 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.635472 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-cqvfd" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.661305 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-hpg68"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.662390 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-hpg68" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.668259 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-9ntmm" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.684074 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-xf9r5" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.692656 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zstd\" (UniqueName: \"kubernetes.io/projected/a5437c7a-1810-4e6f-9db6-22cc39f0c744-kube-api-access-5zstd\") pod \"octavia-operator-controller-manager-7875d7675-dzhk4\" (UID: \"a5437c7a-1810-4e6f-9db6-22cc39f0c744\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-dzhk4" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.692720 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsh76\" (UniqueName: \"kubernetes.io/projected/9df4e8b9-adcf-4442-a0db-70f45bf9977d-kube-api-access-bsh76\") pod \"neutron-operator-controller-manager-7ffd8d76d4-xfptk\" (UID: \"9df4e8b9-adcf-4442-a0db-70f45bf9977d\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xfptk" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.692748 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzjd2\" (UniqueName: \"kubernetes.io/projected/61bd36e7-2117-44d2-86e5-62a7d776434e-kube-api-access-rzjd2\") pod \"nova-operator-controller-manager-ddcbfd695-94z5v\" (UID: \"61bd36e7-2117-44d2-86e5-62a7d776434e\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-94z5v" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.719299 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-hpg68"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.720975 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-g6bgj" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.752735 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzjd2\" (UniqueName: \"kubernetes.io/projected/61bd36e7-2117-44d2-86e5-62a7d776434e-kube-api-access-rzjd2\") pod \"nova-operator-controller-manager-ddcbfd695-94z5v\" (UID: \"61bd36e7-2117-44d2-86e5-62a7d776434e\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-94z5v" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.754511 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zstd\" (UniqueName: \"kubernetes.io/projected/a5437c7a-1810-4e6f-9db6-22cc39f0c744-kube-api-access-5zstd\") pod \"octavia-operator-controller-manager-7875d7675-dzhk4\" (UID: \"a5437c7a-1810-4e6f-9db6-22cc39f0c744\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-dzhk4" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.754712 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsh76\" (UniqueName: \"kubernetes.io/projected/9df4e8b9-adcf-4442-a0db-70f45bf9977d-kube-api-access-bsh76\") pod \"neutron-operator-controller-manager-7ffd8d76d4-xfptk\" (UID: \"9df4e8b9-adcf-4442-a0db-70f45bf9977d\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xfptk" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.771264 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-qvnm9"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.772303 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qvnm9" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.775011 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-vtmgm" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.787854 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85489vr8"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.793868 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/355ddf1c-4f8e-45ee-8f68-af3d0b4feb51-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85489vr8\" (UID: \"355ddf1c-4f8e-45ee-8f68-af3d0b4feb51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85489vr8" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.793993 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdd7j\" (UniqueName: \"kubernetes.io/projected/5a8ec2b4-9702-46de-bcf4-07bc2fe036e1-kube-api-access-wdd7j\") pod \"ovn-operator-controller-manager-6f75f45d54-hpg68\" (UID: \"5a8ec2b4-9702-46de-bcf4-07bc2fe036e1\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-hpg68" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.794044 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88m2s\" (UniqueName: \"kubernetes.io/projected/355ddf1c-4f8e-45ee-8f68-af3d0b4feb51-kube-api-access-88m2s\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85489vr8\" (UID: \"355ddf1c-4f8e-45ee-8f68-af3d0b4feb51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85489vr8" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.807260 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-tf9jw"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.808078 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-qvnm9"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.808156 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-tf9jw" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.810988 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-m2w2l" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.813568 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-tf9jw"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.831992 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-4vw6t"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.832887 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-4vw6t" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.837610 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xfptk" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.842832 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-plkvw" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.846929 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-4vw6t"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.853675 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-94z5v" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.873333 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-gg6cd"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.874217 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gg6cd" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.878429 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-gg6cd"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.884558 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-hljvf" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.894877 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88m2s\" (UniqueName: \"kubernetes.io/projected/355ddf1c-4f8e-45ee-8f68-af3d0b4feb51-kube-api-access-88m2s\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85489vr8\" (UID: \"355ddf1c-4f8e-45ee-8f68-af3d0b4feb51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85489vr8" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.894936 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt92j\" (UniqueName: \"kubernetes.io/projected/6822744e-5d47-466b-9846-88e9c68a3aeb-kube-api-access-bt92j\") pod \"placement-operator-controller-manager-79d5ccc684-qvnm9\" (UID: \"6822744e-5d47-466b-9846-88e9c68a3aeb\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qvnm9" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.894962 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qhdj\" (UniqueName: \"kubernetes.io/projected/203038ea-e296-4f7e-8228-015aee5ec061-kube-api-access-8qhdj\") pod \"swift-operator-controller-manager-547cbdb99f-tf9jw\" (UID: \"203038ea-e296-4f7e-8228-015aee5ec061\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-tf9jw" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.894986 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/355ddf1c-4f8e-45ee-8f68-af3d0b4feb51-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85489vr8\" (UID: \"355ddf1c-4f8e-45ee-8f68-af3d0b4feb51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85489vr8" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.895042 4767 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-wdd7j\" (UniqueName: \"kubernetes.io/projected/5a8ec2b4-9702-46de-bcf4-07bc2fe036e1-kube-api-access-wdd7j\") pod \"ovn-operator-controller-manager-6f75f45d54-hpg68\" (UID: \"5a8ec2b4-9702-46de-bcf4-07bc2fe036e1\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-hpg68" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.895073 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e093bca8-5087-47cd-a9af-719248b96d6d-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-8ksrz\" (UID: \"e093bca8-5087-47cd-a9af-719248b96d6d\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8ksrz" Jan 27 16:07:49 crc kubenswrapper[4767]: E0127 16:07:49.895184 4767 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 16:07:49 crc kubenswrapper[4767]: E0127 16:07:49.895245 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e093bca8-5087-47cd-a9af-719248b96d6d-cert podName:e093bca8-5087-47cd-a9af-719248b96d6d nodeName:}" failed. No retries permitted until 2026-01-27 16:07:50.895230582 +0000 UTC m=+1093.284248105 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e093bca8-5087-47cd-a9af-719248b96d6d-cert") pod "infra-operator-controller-manager-7d75bc88d5-8ksrz" (UID: "e093bca8-5087-47cd-a9af-719248b96d6d") : secret "infra-operator-webhook-server-cert" not found Jan 27 16:07:49 crc kubenswrapper[4767]: E0127 16:07:49.895534 4767 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 16:07:49 crc kubenswrapper[4767]: E0127 16:07:49.895557 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/355ddf1c-4f8e-45ee-8f68-af3d0b4feb51-cert podName:355ddf1c-4f8e-45ee-8f68-af3d0b4feb51 nodeName:}" failed. No retries permitted until 2026-01-27 16:07:50.395549941 +0000 UTC m=+1092.784567464 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/355ddf1c-4f8e-45ee-8f68-af3d0b4feb51-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85489vr8" (UID: "355ddf1c-4f8e-45ee-8f68-af3d0b4feb51") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.927892 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88m2s\" (UniqueName: \"kubernetes.io/projected/355ddf1c-4f8e-45ee-8f68-af3d0b4feb51-kube-api-access-88m2s\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85489vr8\" (UID: \"355ddf1c-4f8e-45ee-8f68-af3d0b4feb51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85489vr8" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.928331 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-dzhk4" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.938985 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdd7j\" (UniqueName: \"kubernetes.io/projected/5a8ec2b4-9702-46de-bcf4-07bc2fe036e1-kube-api-access-wdd7j\") pod \"ovn-operator-controller-manager-6f75f45d54-hpg68\" (UID: \"5a8ec2b4-9702-46de-bcf4-07bc2fe036e1\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-hpg68" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.949946 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-66576874d7-z5wns"] Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.950921 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-66576874d7-z5wns" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.957223 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-hsshc" Jan 27 16:07:49 crc kubenswrapper[4767]: I0127 16:07:49.972497 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-66576874d7-z5wns"] Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:49.998052 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qhdj\" (UniqueName: \"kubernetes.io/projected/203038ea-e296-4f7e-8228-015aee5ec061-kube-api-access-8qhdj\") pod \"swift-operator-controller-manager-547cbdb99f-tf9jw\" (UID: \"203038ea-e296-4f7e-8228-015aee5ec061\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-tf9jw" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:49.998143 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hwhm\" (UniqueName: \"kubernetes.io/projected/666b02fc-6a23-437c-b606-66ba995cd3d6-kube-api-access-5hwhm\") pod \"test-operator-controller-manager-69797bbcbd-gg6cd\" (UID: \"666b02fc-6a23-437c-b606-66ba995cd3d6\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gg6cd" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:49.998274 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt92j\" (UniqueName: \"kubernetes.io/projected/6822744e-5d47-466b-9846-88e9c68a3aeb-kube-api-access-bt92j\") pod \"placement-operator-controller-manager-79d5ccc684-qvnm9\" (UID: \"6822744e-5d47-466b-9846-88e9c68a3aeb\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qvnm9" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:49.998294 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq5zl\" (UniqueName: \"kubernetes.io/projected/72144a58-97b0-4150-8198-e9d8f8b0fa7e-kube-api-access-mq5zl\") pod \"telemetry-operator-controller-manager-799bc87c89-4vw6t\" (UID: \"72144a58-97b0-4150-8198-e9d8f8b0fa7e\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-4vw6t" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.032692 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qhdj\" (UniqueName: \"kubernetes.io/projected/203038ea-e296-4f7e-8228-015aee5ec061-kube-api-access-8qhdj\") pod 
\"swift-operator-controller-manager-547cbdb99f-tf9jw\" (UID: \"203038ea-e296-4f7e-8228-015aee5ec061\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-tf9jw" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.040045 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt92j\" (UniqueName: \"kubernetes.io/projected/6822744e-5d47-466b-9846-88e9c68a3aeb-kube-api-access-bt92j\") pod \"placement-operator-controller-manager-79d5ccc684-qvnm9\" (UID: \"6822744e-5d47-466b-9846-88e9c68a3aeb\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qvnm9" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.096943 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn"] Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.098509 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.101560 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mdhw9" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.101816 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.107978 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq5zl\" (UniqueName: \"kubernetes.io/projected/72144a58-97b0-4150-8198-e9d8f8b0fa7e-kube-api-access-mq5zl\") pod \"telemetry-operator-controller-manager-799bc87c89-4vw6t\" (UID: \"72144a58-97b0-4150-8198-e9d8f8b0fa7e\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-4vw6t" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.108337 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hwhm\" (UniqueName: \"kubernetes.io/projected/666b02fc-6a23-437c-b606-66ba995cd3d6-kube-api-access-5hwhm\") pod \"test-operator-controller-manager-69797bbcbd-gg6cd\" (UID: \"666b02fc-6a23-437c-b606-66ba995cd3d6\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gg6cd" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.108585 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8szl\" (UniqueName: \"kubernetes.io/projected/9d11f824-e923-46e5-958a-f42f9c5504ef-kube-api-access-s8szl\") pod \"watcher-operator-controller-manager-66576874d7-z5wns\" (UID: \"9d11f824-e923-46e5-958a-f42f9c5504ef\") " pod="openstack-operators/watcher-operator-controller-manager-66576874d7-z5wns" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.110642 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.127129 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-hpg68" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.127841 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn"] Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.131915 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq5zl\" (UniqueName: \"kubernetes.io/projected/72144a58-97b0-4150-8198-e9d8f8b0fa7e-kube-api-access-mq5zl\") pod \"telemetry-operator-controller-manager-799bc87c89-4vw6t\" (UID: \"72144a58-97b0-4150-8198-e9d8f8b0fa7e\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-4vw6t" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.132260 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hwhm\" (UniqueName: \"kubernetes.io/projected/666b02fc-6a23-437c-b606-66ba995cd3d6-kube-api-access-5hwhm\") pod \"test-operator-controller-manager-69797bbcbd-gg6cd\" (UID: \"666b02fc-6a23-437c-b606-66ba995cd3d6\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gg6cd" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.150066 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zm4bn"] Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.152812 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zm4bn" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.155543 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qvnm9" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.155711 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-qv5qr" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.174048 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-tf9jw" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.180846 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zm4bn"] Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.208364 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-4vw6t" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.209829 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8szl\" (UniqueName: \"kubernetes.io/projected/9d11f824-e923-46e5-958a-f42f9c5504ef-kube-api-access-s8szl\") pod \"watcher-operator-controller-manager-66576874d7-z5wns\" (UID: \"9d11f824-e923-46e5-958a-f42f9c5504ef\") " pod="openstack-operators/watcher-operator-controller-manager-66576874d7-z5wns" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.209925 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-webhook-certs\") pod \"openstack-operator-controller-manager-79b75b7c86-j2gvn\" (UID: \"8eade9eb-ffdd-43c3-b9ac-5522bc2218b8\") " pod="openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.210055 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzdtv\" (UniqueName: \"kubernetes.io/projected/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-kube-api-access-pzdtv\") pod \"openstack-operator-controller-manager-79b75b7c86-j2gvn\" (UID: \"8eade9eb-ffdd-43c3-b9ac-5522bc2218b8\") " pod="openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.210100 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-metrics-certs\") pod \"openstack-operator-controller-manager-79b75b7c86-j2gvn\" (UID: \"8eade9eb-ffdd-43c3-b9ac-5522bc2218b8\") " pod="openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.227462 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gg6cd" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.237194 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8szl\" (UniqueName: \"kubernetes.io/projected/9d11f824-e923-46e5-958a-f42f9c5504ef-kube-api-access-s8szl\") pod \"watcher-operator-controller-manager-66576874d7-z5wns\" (UID: \"9d11f824-e923-46e5-958a-f42f9c5504ef\") " pod="openstack-operators/watcher-operator-controller-manager-66576874d7-z5wns" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.291646 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.294926 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-2p8b9"] Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.304573 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-66576874d7-z5wns" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.319449 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-webhook-certs\") pod \"openstack-operator-controller-manager-79b75b7c86-j2gvn\" (UID: \"8eade9eb-ffdd-43c3-b9ac-5522bc2218b8\") " pod="openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.319550 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzdtv\" (UniqueName: \"kubernetes.io/projected/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-kube-api-access-pzdtv\") pod \"openstack-operator-controller-manager-79b75b7c86-j2gvn\" (UID: \"8eade9eb-ffdd-43c3-b9ac-5522bc2218b8\") " pod="openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.319638 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-metrics-certs\") pod \"openstack-operator-controller-manager-79b75b7c86-j2gvn\" (UID: \"8eade9eb-ffdd-43c3-b9ac-5522bc2218b8\") " pod="openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.319753 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smclr\" (UniqueName: \"kubernetes.io/projected/057f1cf6-9e40-400e-aaa7-9acd79d01c3d-kube-api-access-smclr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zm4bn\" (UID: \"057f1cf6-9e40-400e-aaa7-9acd79d01c3d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zm4bn" Jan 27 16:07:50 crc kubenswrapper[4767]: E0127 16:07:50.320062 4767 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 16:07:50 crc kubenswrapper[4767]: E0127 16:07:50.320114 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-webhook-certs podName:8eade9eb-ffdd-43c3-b9ac-5522bc2218b8 nodeName:}" failed. No retries permitted until 2026-01-27 16:07:50.820096354 +0000 UTC m=+1093.209113887 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-webhook-certs") pod "openstack-operator-controller-manager-79b75b7c86-j2gvn" (UID: "8eade9eb-ffdd-43c3-b9ac-5522bc2218b8") : secret "webhook-server-cert" not found Jan 27 16:07:50 crc kubenswrapper[4767]: E0127 16:07:50.320489 4767 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 16:07:50 crc kubenswrapper[4767]: E0127 16:07:50.320521 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-metrics-certs podName:8eade9eb-ffdd-43c3-b9ac-5522bc2218b8 nodeName:}" failed. No retries permitted until 2026-01-27 16:07:50.820510566 +0000 UTC m=+1093.209528089 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-metrics-certs") pod "openstack-operator-controller-manager-79b75b7c86-j2gvn" (UID: "8eade9eb-ffdd-43c3-b9ac-5522bc2218b8") : secret "metrics-server-cert" not found Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.342995 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzdtv\" (UniqueName: \"kubernetes.io/projected/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-kube-api-access-pzdtv\") pod \"openstack-operator-controller-manager-79b75b7c86-j2gvn\" (UID: \"8eade9eb-ffdd-43c3-b9ac-5522bc2218b8\") " pod="openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.424054 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/355ddf1c-4f8e-45ee-8f68-af3d0b4feb51-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85489vr8\" (UID: \"355ddf1c-4f8e-45ee-8f68-af3d0b4feb51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85489vr8" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.424172 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smclr\" (UniqueName: \"kubernetes.io/projected/057f1cf6-9e40-400e-aaa7-9acd79d01c3d-kube-api-access-smclr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zm4bn\" (UID: \"057f1cf6-9e40-400e-aaa7-9acd79d01c3d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zm4bn" Jan 27 16:07:50 crc kubenswrapper[4767]: E0127 16:07:50.425032 4767 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 16:07:50 crc kubenswrapper[4767]: E0127 16:07:50.425091 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/355ddf1c-4f8e-45ee-8f68-af3d0b4feb51-cert podName:355ddf1c-4f8e-45ee-8f68-af3d0b4feb51 nodeName:}" failed. No retries permitted until 2026-01-27 16:07:51.425073707 +0000 UTC m=+1093.814091230 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/355ddf1c-4f8e-45ee-8f68-af3d0b4feb51-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85489vr8" (UID: "355ddf1c-4f8e-45ee-8f68-af3d0b4feb51") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.458633 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smclr\" (UniqueName: \"kubernetes.io/projected/057f1cf6-9e40-400e-aaa7-9acd79d01c3d-kube-api-access-smclr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zm4bn\" (UID: \"057f1cf6-9e40-400e-aaa7-9acd79d01c3d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zm4bn" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.463648 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zm4bn" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.626168 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-nn275"] Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.832979 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-metrics-certs\") pod \"openstack-operator-controller-manager-79b75b7c86-j2gvn\" (UID: \"8eade9eb-ffdd-43c3-b9ac-5522bc2218b8\") " pod="openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn" Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.833153 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-webhook-certs\") pod \"openstack-operator-controller-manager-79b75b7c86-j2gvn\" (UID: \"8eade9eb-ffdd-43c3-b9ac-5522bc2218b8\") " pod="openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn" Jan 27 16:07:50 crc kubenswrapper[4767]: E0127 16:07:50.833296 4767 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 16:07:50 crc kubenswrapper[4767]: E0127 16:07:50.833387 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-webhook-certs podName:8eade9eb-ffdd-43c3-b9ac-5522bc2218b8 nodeName:}" failed. No retries permitted until 2026-01-27 16:07:51.833365002 +0000 UTC m=+1094.222382575 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-webhook-certs") pod "openstack-operator-controller-manager-79b75b7c86-j2gvn" (UID: "8eade9eb-ffdd-43c3-b9ac-5522bc2218b8") : secret "webhook-server-cert" not found Jan 27 16:07:50 crc kubenswrapper[4767]: E0127 16:07:50.834109 4767 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 16:07:50 crc kubenswrapper[4767]: E0127 16:07:50.834147 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-metrics-certs podName:8eade9eb-ffdd-43c3-b9ac-5522bc2218b8 nodeName:}" failed. No retries permitted until 2026-01-27 16:07:51.834135464 +0000 UTC m=+1094.223153087 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-metrics-certs") pod "openstack-operator-controller-manager-79b75b7c86-j2gvn" (UID: "8eade9eb-ffdd-43c3-b9ac-5522bc2218b8") : secret "metrics-server-cert" not found Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.862552 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-zhm6x"] Jan 27 16:07:50 crc kubenswrapper[4767]: W0127 16:07:50.877906 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a1435fd_9fab_4f48_a588_d8ae2aa1e120.slice/crio-84c8c00de4cefd86e1f97ef6eeba2e6c8d0674e76753cd47e30bb7b2d3658983 WatchSource:0}: Error finding container 84c8c00de4cefd86e1f97ef6eeba2e6c8d0674e76753cd47e30bb7b2d3658983: Status 404 returned error can't find the container with id 84c8c00de4cefd86e1f97ef6eeba2e6c8d0674e76753cd47e30bb7b2d3658983 Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.885745 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-vrwcs"] Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.898191 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-zhm6x" event={"ID":"916825ff-c27d-4760-92bc-4adb7dc12ca2","Type":"ContainerStarted","Data":"506e5b024d2cac5c986f9e48d990bd350ffed803a459f90bd846a51973755405"} Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.907247 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-vrwcs" event={"ID":"6a1435fd-9fab-4f48-a588-d8ae2aa1e120","Type":"ContainerStarted","Data":"84c8c00de4cefd86e1f97ef6eeba2e6c8d0674e76753cd47e30bb7b2d3658983"} Jan 27 16:07:50 crc kubenswrapper[4767]: W0127 16:07:50.911047 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa44bde6_467e_42ef_b797_851ee0f87a12.slice/crio-c83ecf27a21823954cac0f3ea0786540b0de1896f9fb141433f74d57a9cabdd6 WatchSource:0}: Error finding container c83ecf27a21823954cac0f3ea0786540b0de1896f9fb141433f74d57a9cabdd6: Status 404 returned error can't find the container with id c83ecf27a21823954cac0f3ea0786540b0de1896f9fb141433f74d57a9cabdd6 Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.913552 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-nn275" event={"ID":"d1f0c156-6150-435c-afc4-224f4f72a0e2","Type":"ContainerStarted","Data":"b91a66d61f6f7c6955b30579f89bb2e2474ed577bec8d33256b318b49612f0eb"} Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.913606 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7vc2f"] Jan 27 16:07:50 crc kubenswrapper[4767]: W0127 16:07:50.914298 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe7ca101_b6f4_4733_a896_a9d203cc4bc0.slice/crio-707b92a08f4db47c63763955756bbc59f31005d2ffe9ab9d3dcaf6b9b0c4146a WatchSource:0}: Error finding container 707b92a08f4db47c63763955756bbc59f31005d2ffe9ab9d3dcaf6b9b0c4146a: Status 404 returned error can't find the container with id 707b92a08f4db47c63763955756bbc59f31005d2ffe9ab9d3dcaf6b9b0c4146a 
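Note: the repeated MountVolume.SetUp failures in this stretch of the log all trace back to secrets that do not yet exist in the openstack-operators namespace (infra-operator-webhook-server-cert, openstack-baremetal-operator-webhook-server-cert, webhook-server-cert, metrics-server-cert). Each failed attempt is re-queued by nestedpendingoperations.go with a doubling durationBeforeRetry: 500ms, then 1s, then 2s, and 4s further on. The Go sketch below only illustrates that capped-doubling pattern as it appears in these entries; the 500ms seed and the 2-minute cap are assumptions chosen for the example, not values taken from the kubelet source.

    // backoff_sketch.go: a minimal sketch of the doubling retry delay
    // visible in the "durationBeforeRetry" values above. Illustrative
    // only; this is not the kubelet's nestedpendingoperations code.
    package main

    import (
        "fmt"
        "time"
    )

    // nextDelay doubles the previous delay up to maxDelay. The 500ms
    // seed and the cap passed by the caller are assumptions made for
    // this example.
    func nextDelay(prev, maxDelay time.Duration) time.Duration {
        if prev <= 0 {
            return 500 * time.Millisecond
        }
        next := 2 * prev
        if next > maxDelay {
            return maxDelay
        }
        return next
    }

    func main() {
        var d time.Duration
        for i := 0; i < 6; i++ {
            d = nextDelay(d, 2*time.Minute)
            fmt.Println(d) // 500ms 1s 2s 4s 8s 16s
        }
    }

The retries are harmless in themselves and stop as soon as the secrets are created (typically by whatever component issues the operators' webhook and metrics certificates); one way to confirm the state at this point would be "kubectl -n openstack-operators get secret webhook-server-cert", which should report NotFound until then. The "pull QPS exceeded" and ImagePullBackOff errors that follow a few seconds later are a separate, likewise self-resolving condition: the kubelet rate-limits image pulls on the client side (registryPullQPS, default 5, and registryBurst, default 10, in its configuration), so a burst of operator pods starting at once temporarily exceeds that budget and the affected containers retry under back-off.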
Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.915576 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-2p8b9" event={"ID":"4b4d49ca-1e76-4d5a-8205-cdb44f6afa01","Type":"ContainerStarted","Data":"50997a4c35be91e336cae2e4e993deb75fc0be9b05d652eac138445751558c5e"} Jan 27 16:07:50 crc kubenswrapper[4767]: W0127 16:07:50.927018 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8181a54_8433_4343_84b5_f32f6f80f0d6.slice/crio-2500fad17dbc67352ccebff772e3f5e0f86a9afdbe0910fe91e107633f10a7d4 WatchSource:0}: Error finding container 2500fad17dbc67352ccebff772e3f5e0f86a9afdbe0910fe91e107633f10a7d4: Status 404 returned error can't find the container with id 2500fad17dbc67352ccebff772e3f5e0f86a9afdbe0910fe91e107633f10a7d4 Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.935022 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e093bca8-5087-47cd-a9af-719248b96d6d-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-8ksrz\" (UID: \"e093bca8-5087-47cd-a9af-719248b96d6d\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8ksrz" Jan 27 16:07:50 crc kubenswrapper[4767]: E0127 16:07:50.935418 4767 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 16:07:50 crc kubenswrapper[4767]: E0127 16:07:50.935470 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e093bca8-5087-47cd-a9af-719248b96d6d-cert podName:e093bca8-5087-47cd-a9af-719248b96d6d nodeName:}" failed. No retries permitted until 2026-01-27 16:07:52.935452241 +0000 UTC m=+1095.324469764 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e093bca8-5087-47cd-a9af-719248b96d6d-cert") pod "infra-operator-controller-manager-7d75bc88d5-8ksrz" (UID: "e093bca8-5087-47cd-a9af-719248b96d6d") : secret "infra-operator-webhook-server-cert" not found Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.939875 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-j7sdn"] Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.957456 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-l8nss"] Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.964246 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-g6bgj"] Jan 27 16:07:50 crc kubenswrapper[4767]: I0127 16:07:50.969123 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-575ffb885b-lwfvm"] Jan 27 16:07:51 crc kubenswrapper[4767]: I0127 16:07:51.349727 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-tf9jw"] Jan 27 16:07:51 crc kubenswrapper[4767]: I0127 16:07:51.371692 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-94z5v"] Jan 27 16:07:51 crc kubenswrapper[4767]: I0127 16:07:51.381299 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-xf9r5"] Jan 27 16:07:51 crc kubenswrapper[4767]: I0127 16:07:51.394265 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-hpg68"] Jan 27 16:07:51 crc kubenswrapper[4767]: I0127 16:07:51.419145 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-gg6cd"] Jan 27 16:07:51 crc kubenswrapper[4767]: W0127 16:07:51.424012 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod203038ea_e296_4f7e_8228_015aee5ec061.slice/crio-26745ef0fba4884fb919776a48376cc72913d13eacb644d04d5cd663858a307f WatchSource:0}: Error finding container 26745ef0fba4884fb919776a48376cc72913d13eacb644d04d5cd663858a307f: Status 404 returned error can't find the container with id 26745ef0fba4884fb919776a48376cc72913d13eacb644d04d5cd663858a307f Jan 27 16:07:51 crc kubenswrapper[4767]: I0127 16:07:51.431712 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-qvnm9"] Jan 27 16:07:51 crc kubenswrapper[4767]: W0127 16:07:51.435651 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a8ec2b4_9702_46de_bcf4_07bc2fe036e1.slice/crio-21d885a6e3b5dd506df0bb983dd134c2c780feef882815f56795716f3e205109 WatchSource:0}: Error finding container 21d885a6e3b5dd506df0bb983dd134c2c780feef882815f56795716f3e205109: Status 404 returned error can't find the container with id 21d885a6e3b5dd506df0bb983dd134c2c780feef882815f56795716f3e205109 Jan 27 16:07:51 crc kubenswrapper[4767]: W0127 16:07:51.442403 4767 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61bd36e7_2117_44d2_86e5_62a7d776434e.slice/crio-3db1e7761f7835adfd6d3e9e87a1f7bdabb29352e1d972c7de471a05d96d8e8e WatchSource:0}: Error finding container 3db1e7761f7835adfd6d3e9e87a1f7bdabb29352e1d972c7de471a05d96d8e8e: Status 404 returned error can't find the container with id 3db1e7761f7835adfd6d3e9e87a1f7bdabb29352e1d972c7de471a05d96d8e8e Jan 27 16:07:51 crc kubenswrapper[4767]: I0127 16:07:51.445466 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/355ddf1c-4f8e-45ee-8f68-af3d0b4feb51-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85489vr8\" (UID: \"355ddf1c-4f8e-45ee-8f68-af3d0b4feb51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85489vr8" Jan 27 16:07:51 crc kubenswrapper[4767]: E0127 16:07:51.445579 4767 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 16:07:51 crc kubenswrapper[4767]: E0127 16:07:51.445617 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/355ddf1c-4f8e-45ee-8f68-af3d0b4feb51-cert podName:355ddf1c-4f8e-45ee-8f68-af3d0b4feb51 nodeName:}" failed. No retries permitted until 2026-01-27 16:07:53.445604649 +0000 UTC m=+1095.834622162 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/355ddf1c-4f8e-45ee-8f68-af3d0b4feb51-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85489vr8" (UID: "355ddf1c-4f8e-45ee-8f68-af3d0b4feb51") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 16:07:51 crc kubenswrapper[4767]: I0127 16:07:51.449103 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-dzhk4"] Jan 27 16:07:51 crc kubenswrapper[4767]: I0127 16:07:51.460732 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xfptk"] Jan 27 16:07:51 crc kubenswrapper[4767]: W0127 16:07:51.462812 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5437c7a_1810_4e6f_9db6_22cc39f0c744.slice/crio-2debc19dbe69812d9b023e6906e88a9c16879fd1a8e01c77b1a073056731d0b2 WatchSource:0}: Error finding container 2debc19dbe69812d9b023e6906e88a9c16879fd1a8e01c77b1a073056731d0b2: Status 404 returned error can't find the container with id 2debc19dbe69812d9b023e6906e88a9c16879fd1a8e01c77b1a073056731d0b2 Jan 27 16:07:51 crc kubenswrapper[4767]: E0127 16:07:51.465094 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.154:5001/openstack-k8s-operators/watcher-operator:ec9703c7d016457c3fb11352bf5c1bd36a8b39e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: 
{{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s8szl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-66576874d7-z5wns_openstack-operators(9d11f824-e923-46e5-958a-f42f9c5504ef): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 16:07:51 crc kubenswrapper[4767]: E0127 16:07:51.465218 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-smclr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-zm4bn_openstack-operators(057f1cf6-9e40-400e-aaa7-9acd79d01c3d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 16:07:51 crc kubenswrapper[4767]: E0127 16:07:51.465401 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/manila-operator@sha256:82feceb236aaeae01761b172c94173d2624fe12feeb76a18c8aa2a664bafaf84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5lw4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-849fcfbb6b-xf9r5_openstack-operators(3ae8a5b5-c9f9-4130-af1e-721617d5c204): ErrImagePull: pull 
QPS exceeded" logger="UnhandledError" Jan 27 16:07:51 crc kubenswrapper[4767]: E0127 16:07:51.465927 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bsh76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7ffd8d76d4-xfptk_openstack-operators(9df4e8b9-adcf-4442-a0db-70f45bf9977d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 16:07:51 crc kubenswrapper[4767]: E0127 16:07:51.466313 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zm4bn" podUID="057f1cf6-9e40-400e-aaa7-9acd79d01c3d" Jan 27 16:07:51 crc kubenswrapper[4767]: E0127 16:07:51.466332 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-66576874d7-z5wns" podUID="9d11f824-e923-46e5-958a-f42f9c5504ef" Jan 27 16:07:51 crc kubenswrapper[4767]: E0127 16:07:51.466403 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:bb8d23f38682e4b987b621a3116500a76d0dc380a1bfb9ea77f18dfacdee4f49,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5zstd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7875d7675-dzhk4_openstack-operators(a5437c7a-1810-4e6f-9db6-22cc39f0c744): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 16:07:51 crc kubenswrapper[4767]: E0127 16:07:51.466492 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-xf9r5" podUID="3ae8a5b5-c9f9-4130-af1e-721617d5c204" Jan 27 16:07:51 crc kubenswrapper[4767]: E0127 16:07:51.467067 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xfptk" podUID="9df4e8b9-adcf-4442-a0db-70f45bf9977d" Jan 27 16:07:51 crc kubenswrapper[4767]: E0127 16:07:51.467656 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-dzhk4" podUID="a5437c7a-1810-4e6f-9db6-22cc39f0c744" Jan 27 16:07:51 crc kubenswrapper[4767]: W0127 16:07:51.468520 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6822744e_5d47_466b_9846_88e9c68a3aeb.slice/crio-f5f311c93658eb4c570a94b86496fa1ed352028077fbca60636da388804bb228 
WatchSource:0}: Error finding container f5f311c93658eb4c570a94b86496fa1ed352028077fbca60636da388804bb228: Status 404 returned error can't find the container with id f5f311c93658eb4c570a94b86496fa1ed352028077fbca60636da388804bb228 Jan 27 16:07:51 crc kubenswrapper[4767]: E0127 16:07:51.471973 4767 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bt92j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-qvnm9_openstack-operators(6822744e-5d47-466b-9846-88e9c68a3aeb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 16:07:51 crc kubenswrapper[4767]: E0127 16:07:51.473377 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qvnm9" podUID="6822744e-5d47-466b-9846-88e9c68a3aeb" Jan 27 16:07:51 crc kubenswrapper[4767]: I0127 16:07:51.475233 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zm4bn"] Jan 27 16:07:51 crc kubenswrapper[4767]: I0127 16:07:51.483336 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-66576874d7-z5wns"] Jan 27 16:07:51 crc kubenswrapper[4767]: 
I0127 16:07:51.489188 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-4vw6t"] Jan 27 16:07:51 crc kubenswrapper[4767]: I0127 16:07:51.853801 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-webhook-certs\") pod \"openstack-operator-controller-manager-79b75b7c86-j2gvn\" (UID: \"8eade9eb-ffdd-43c3-b9ac-5522bc2218b8\") " pod="openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn" Jan 27 16:07:51 crc kubenswrapper[4767]: I0127 16:07:51.853858 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-metrics-certs\") pod \"openstack-operator-controller-manager-79b75b7c86-j2gvn\" (UID: \"8eade9eb-ffdd-43c3-b9ac-5522bc2218b8\") " pod="openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn" Jan 27 16:07:51 crc kubenswrapper[4767]: E0127 16:07:51.853971 4767 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 16:07:51 crc kubenswrapper[4767]: E0127 16:07:51.854025 4767 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 16:07:51 crc kubenswrapper[4767]: E0127 16:07:51.854099 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-webhook-certs podName:8eade9eb-ffdd-43c3-b9ac-5522bc2218b8 nodeName:}" failed. No retries permitted until 2026-01-27 16:07:53.854049879 +0000 UTC m=+1096.243067402 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-webhook-certs") pod "openstack-operator-controller-manager-79b75b7c86-j2gvn" (UID: "8eade9eb-ffdd-43c3-b9ac-5522bc2218b8") : secret "webhook-server-cert" not found Jan 27 16:07:51 crc kubenswrapper[4767]: E0127 16:07:51.854118 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-metrics-certs podName:8eade9eb-ffdd-43c3-b9ac-5522bc2218b8 nodeName:}" failed. No retries permitted until 2026-01-27 16:07:53.854111561 +0000 UTC m=+1096.243129084 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-metrics-certs") pod "openstack-operator-controller-manager-79b75b7c86-j2gvn" (UID: "8eade9eb-ffdd-43c3-b9ac-5522bc2218b8") : secret "metrics-server-cert" not found Jan 27 16:07:51 crc kubenswrapper[4767]: I0127 16:07:51.924270 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-tf9jw" event={"ID":"203038ea-e296-4f7e-8228-015aee5ec061","Type":"ContainerStarted","Data":"26745ef0fba4884fb919776a48376cc72913d13eacb644d04d5cd663858a307f"} Jan 27 16:07:51 crc kubenswrapper[4767]: I0127 16:07:51.926790 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qvnm9" event={"ID":"6822744e-5d47-466b-9846-88e9c68a3aeb","Type":"ContainerStarted","Data":"f5f311c93658eb4c570a94b86496fa1ed352028077fbca60636da388804bb228"} Jan 27 16:07:51 crc kubenswrapper[4767]: I0127 16:07:51.928754 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-hpg68" event={"ID":"5a8ec2b4-9702-46de-bcf4-07bc2fe036e1","Type":"ContainerStarted","Data":"21d885a6e3b5dd506df0bb983dd134c2c780feef882815f56795716f3e205109"} Jan 27 16:07:51 crc kubenswrapper[4767]: E0127 16:07:51.929274 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qvnm9" podUID="6822744e-5d47-466b-9846-88e9c68a3aeb" Jan 27 16:07:51 crc kubenswrapper[4767]: I0127 16:07:51.939296 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-dzhk4" event={"ID":"a5437c7a-1810-4e6f-9db6-22cc39f0c744","Type":"ContainerStarted","Data":"2debc19dbe69812d9b023e6906e88a9c16879fd1a8e01c77b1a073056731d0b2"} Jan 27 16:07:51 crc kubenswrapper[4767]: E0127 16:07:51.941944 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:bb8d23f38682e4b987b621a3116500a76d0dc380a1bfb9ea77f18dfacdee4f49\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-dzhk4" podUID="a5437c7a-1810-4e6f-9db6-22cc39f0c744" Jan 27 16:07:51 crc kubenswrapper[4767]: I0127 16:07:51.945888 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zm4bn" event={"ID":"057f1cf6-9e40-400e-aaa7-9acd79d01c3d","Type":"ContainerStarted","Data":"eaaa0764d75d1450740ddbf74cf4097b3bc8d4f469ed6131990c3b7a1745e42f"} Jan 27 16:07:51 crc kubenswrapper[4767]: I0127 16:07:51.949255 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-j7sdn" event={"ID":"f7123cb1-dbea-42fd-abba-970911e37f5f","Type":"ContainerStarted","Data":"68e0932ba5c04b7ab56df7d2b88502c392903d6f157aae15467a65f2523e9449"} Jan 27 16:07:51 crc kubenswrapper[4767]: E0127 16:07:51.950817 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zm4bn" podUID="057f1cf6-9e40-400e-aaa7-9acd79d01c3d" Jan 27 16:07:51 crc kubenswrapper[4767]: I0127 16:07:51.951424 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-lwfvm" event={"ID":"a8181a54-8433-4343-84b5-f32f6f80f0d6","Type":"ContainerStarted","Data":"2500fad17dbc67352ccebff772e3f5e0f86a9afdbe0910fe91e107633f10a7d4"} Jan 27 16:07:51 crc kubenswrapper[4767]: I0127 16:07:51.953038 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-l8nss" event={"ID":"fe7ca101-b6f4-4733-a896-a9d203cc4bc0","Type":"ContainerStarted","Data":"707b92a08f4db47c63763955756bbc59f31005d2ffe9ab9d3dcaf6b9b0c4146a"} Jan 27 16:07:51 crc kubenswrapper[4767]: I0127 16:07:51.955333 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gg6cd" event={"ID":"666b02fc-6a23-437c-b606-66ba995cd3d6","Type":"ContainerStarted","Data":"e9c01e12de8ee595b4d0a28be387e5fa0ebf7ba1d9529b9389ddc7d852108b30"} Jan 27 16:07:51 crc kubenswrapper[4767]: I0127 16:07:51.957044 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-4vw6t" event={"ID":"72144a58-97b0-4150-8198-e9d8f8b0fa7e","Type":"ContainerStarted","Data":"bc002e49fcd922a143271c8e052a291c52b23a8cfd941fd2af87041c482ec93c"} Jan 27 16:07:51 crc kubenswrapper[4767]: I0127 16:07:51.989680 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xfptk" event={"ID":"9df4e8b9-adcf-4442-a0db-70f45bf9977d","Type":"ContainerStarted","Data":"0c5c9059e04e10028803e5a4d187e2d3e5948023fdd77485ade093cdf405ad11"} Jan 27 16:07:51 crc kubenswrapper[4767]: E0127 16:07:51.990983 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xfptk" podUID="9df4e8b9-adcf-4442-a0db-70f45bf9977d" Jan 27 16:07:51 crc kubenswrapper[4767]: I0127 16:07:51.993317 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-xf9r5" event={"ID":"3ae8a5b5-c9f9-4130-af1e-721617d5c204","Type":"ContainerStarted","Data":"33dde61a861e1b52e3e2e298f1d304ce8b1855aae64b2062809fb624fec618f0"} Jan 27 16:07:51 crc kubenswrapper[4767]: E0127 16:07:51.994973 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/manila-operator@sha256:82feceb236aaeae01761b172c94173d2624fe12feeb76a18c8aa2a664bafaf84\\\"\"" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-xf9r5" podUID="3ae8a5b5-c9f9-4130-af1e-721617d5c204" Jan 27 16:07:51 crc kubenswrapper[4767]: I0127 16:07:51.996047 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7vc2f" 
event={"ID":"aa44bde6-467e-42ef-b797-851ee0f87a12","Type":"ContainerStarted","Data":"c83ecf27a21823954cac0f3ea0786540b0de1896f9fb141433f74d57a9cabdd6"} Jan 27 16:07:51 crc kubenswrapper[4767]: I0127 16:07:51.997691 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-g6bgj" event={"ID":"4c6965aa-5607-4647-a78f-eb708720424e","Type":"ContainerStarted","Data":"9cab86f53ca38ba5a8f39b2ea12669e04a3281feaa015e526ae95b155484397d"} Jan 27 16:07:51 crc kubenswrapper[4767]: I0127 16:07:51.999074 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-94z5v" event={"ID":"61bd36e7-2117-44d2-86e5-62a7d776434e","Type":"ContainerStarted","Data":"3db1e7761f7835adfd6d3e9e87a1f7bdabb29352e1d972c7de471a05d96d8e8e"} Jan 27 16:07:52 crc kubenswrapper[4767]: I0127 16:07:52.017601 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-66576874d7-z5wns" event={"ID":"9d11f824-e923-46e5-958a-f42f9c5504ef","Type":"ContainerStarted","Data":"5c3fd0b0eff732d28e30115f2470b0629998f612a345dd647592ec98daec0e1a"} Jan 27 16:07:52 crc kubenswrapper[4767]: E0127 16:07:52.020236 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.154:5001/openstack-k8s-operators/watcher-operator:ec9703c7d016457c3fb11352bf5c1bd36a8b39e6\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-66576874d7-z5wns" podUID="9d11f824-e923-46e5-958a-f42f9c5504ef" Jan 27 16:07:52 crc kubenswrapper[4767]: I0127 16:07:52.969038 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e093bca8-5087-47cd-a9af-719248b96d6d-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-8ksrz\" (UID: \"e093bca8-5087-47cd-a9af-719248b96d6d\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8ksrz" Jan 27 16:07:52 crc kubenswrapper[4767]: E0127 16:07:52.969368 4767 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 16:07:52 crc kubenswrapper[4767]: E0127 16:07:52.969443 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e093bca8-5087-47cd-a9af-719248b96d6d-cert podName:e093bca8-5087-47cd-a9af-719248b96d6d nodeName:}" failed. No retries permitted until 2026-01-27 16:07:56.969425271 +0000 UTC m=+1099.358442794 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e093bca8-5087-47cd-a9af-719248b96d6d-cert") pod "infra-operator-controller-manager-7d75bc88d5-8ksrz" (UID: "e093bca8-5087-47cd-a9af-719248b96d6d") : secret "infra-operator-webhook-server-cert" not found Jan 27 16:07:53 crc kubenswrapper[4767]: E0127 16:07:53.027376 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:bb8d23f38682e4b987b621a3116500a76d0dc380a1bfb9ea77f18dfacdee4f49\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-dzhk4" podUID="a5437c7a-1810-4e6f-9db6-22cc39f0c744" Jan 27 16:07:53 crc kubenswrapper[4767]: E0127 16:07:53.030050 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zm4bn" podUID="057f1cf6-9e40-400e-aaa7-9acd79d01c3d" Jan 27 16:07:53 crc kubenswrapper[4767]: E0127 16:07:53.030104 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qvnm9" podUID="6822744e-5d47-466b-9846-88e9c68a3aeb" Jan 27 16:07:53 crc kubenswrapper[4767]: E0127 16:07:53.030141 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.154:5001/openstack-k8s-operators/watcher-operator:ec9703c7d016457c3fb11352bf5c1bd36a8b39e6\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-66576874d7-z5wns" podUID="9d11f824-e923-46e5-958a-f42f9c5504ef" Jan 27 16:07:53 crc kubenswrapper[4767]: E0127 16:07:53.030174 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xfptk" podUID="9df4e8b9-adcf-4442-a0db-70f45bf9977d" Jan 27 16:07:53 crc kubenswrapper[4767]: E0127 16:07:53.030225 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/manila-operator@sha256:82feceb236aaeae01761b172c94173d2624fe12feeb76a18c8aa2a664bafaf84\\\"\"" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-xf9r5" podUID="3ae8a5b5-c9f9-4130-af1e-721617d5c204" Jan 27 16:07:53 crc kubenswrapper[4767]: I0127 16:07:53.476772 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/355ddf1c-4f8e-45ee-8f68-af3d0b4feb51-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85489vr8\" (UID: \"355ddf1c-4f8e-45ee-8f68-af3d0b4feb51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85489vr8" Jan 27 16:07:53 crc 
kubenswrapper[4767]: E0127 16:07:53.477059 4767 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 16:07:53 crc kubenswrapper[4767]: E0127 16:07:53.477162 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/355ddf1c-4f8e-45ee-8f68-af3d0b4feb51-cert podName:355ddf1c-4f8e-45ee-8f68-af3d0b4feb51 nodeName:}" failed. No retries permitted until 2026-01-27 16:07:57.477139469 +0000 UTC m=+1099.866157032 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/355ddf1c-4f8e-45ee-8f68-af3d0b4feb51-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85489vr8" (UID: "355ddf1c-4f8e-45ee-8f68-af3d0b4feb51") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 16:07:53 crc kubenswrapper[4767]: I0127 16:07:53.882609 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-webhook-certs\") pod \"openstack-operator-controller-manager-79b75b7c86-j2gvn\" (UID: \"8eade9eb-ffdd-43c3-b9ac-5522bc2218b8\") " pod="openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn" Jan 27 16:07:53 crc kubenswrapper[4767]: I0127 16:07:53.882890 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-metrics-certs\") pod \"openstack-operator-controller-manager-79b75b7c86-j2gvn\" (UID: \"8eade9eb-ffdd-43c3-b9ac-5522bc2218b8\") " pod="openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn" Jan 27 16:07:53 crc kubenswrapper[4767]: E0127 16:07:53.883008 4767 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 16:07:53 crc kubenswrapper[4767]: E0127 16:07:53.883066 4767 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 16:07:53 crc kubenswrapper[4767]: E0127 16:07:53.883093 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-webhook-certs podName:8eade9eb-ffdd-43c3-b9ac-5522bc2218b8 nodeName:}" failed. No retries permitted until 2026-01-27 16:07:57.883074447 +0000 UTC m=+1100.272091960 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-webhook-certs") pod "openstack-operator-controller-manager-79b75b7c86-j2gvn" (UID: "8eade9eb-ffdd-43c3-b9ac-5522bc2218b8") : secret "webhook-server-cert" not found Jan 27 16:07:53 crc kubenswrapper[4767]: E0127 16:07:53.883130 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-metrics-certs podName:8eade9eb-ffdd-43c3-b9ac-5522bc2218b8 nodeName:}" failed. No retries permitted until 2026-01-27 16:07:57.883115538 +0000 UTC m=+1100.272133061 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-metrics-certs") pod "openstack-operator-controller-manager-79b75b7c86-j2gvn" (UID: "8eade9eb-ffdd-43c3-b9ac-5522bc2218b8") : secret "metrics-server-cert" not found Jan 27 16:07:57 crc kubenswrapper[4767]: I0127 16:07:57.031719 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e093bca8-5087-47cd-a9af-719248b96d6d-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-8ksrz\" (UID: \"e093bca8-5087-47cd-a9af-719248b96d6d\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8ksrz" Jan 27 16:07:57 crc kubenswrapper[4767]: E0127 16:07:57.031907 4767 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 16:07:57 crc kubenswrapper[4767]: E0127 16:07:57.032245 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e093bca8-5087-47cd-a9af-719248b96d6d-cert podName:e093bca8-5087-47cd-a9af-719248b96d6d nodeName:}" failed. No retries permitted until 2026-01-27 16:08:05.032222834 +0000 UTC m=+1107.421240357 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e093bca8-5087-47cd-a9af-719248b96d6d-cert") pod "infra-operator-controller-manager-7d75bc88d5-8ksrz" (UID: "e093bca8-5087-47cd-a9af-719248b96d6d") : secret "infra-operator-webhook-server-cert" not found Jan 27 16:07:57 crc kubenswrapper[4767]: I0127 16:07:57.536898 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/355ddf1c-4f8e-45ee-8f68-af3d0b4feb51-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85489vr8\" (UID: \"355ddf1c-4f8e-45ee-8f68-af3d0b4feb51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85489vr8" Jan 27 16:07:57 crc kubenswrapper[4767]: E0127 16:07:57.537050 4767 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 16:07:57 crc kubenswrapper[4767]: E0127 16:07:57.537111 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/355ddf1c-4f8e-45ee-8f68-af3d0b4feb51-cert podName:355ddf1c-4f8e-45ee-8f68-af3d0b4feb51 nodeName:}" failed. No retries permitted until 2026-01-27 16:08:05.53709599 +0000 UTC m=+1107.926113513 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/355ddf1c-4f8e-45ee-8f68-af3d0b4feb51-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85489vr8" (UID: "355ddf1c-4f8e-45ee-8f68-af3d0b4feb51") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 16:07:57 crc kubenswrapper[4767]: I0127 16:07:57.941217 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-webhook-certs\") pod \"openstack-operator-controller-manager-79b75b7c86-j2gvn\" (UID: \"8eade9eb-ffdd-43c3-b9ac-5522bc2218b8\") " pod="openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn" Jan 27 16:07:57 crc kubenswrapper[4767]: I0127 16:07:57.941319 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-metrics-certs\") pod \"openstack-operator-controller-manager-79b75b7c86-j2gvn\" (UID: \"8eade9eb-ffdd-43c3-b9ac-5522bc2218b8\") " pod="openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn" Jan 27 16:07:57 crc kubenswrapper[4767]: E0127 16:07:57.941524 4767 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 16:07:57 crc kubenswrapper[4767]: E0127 16:07:57.941522 4767 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 16:07:57 crc kubenswrapper[4767]: E0127 16:07:57.941585 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-metrics-certs podName:8eade9eb-ffdd-43c3-b9ac-5522bc2218b8 nodeName:}" failed. No retries permitted until 2026-01-27 16:08:05.941570436 +0000 UTC m=+1108.330587959 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-metrics-certs") pod "openstack-operator-controller-manager-79b75b7c86-j2gvn" (UID: "8eade9eb-ffdd-43c3-b9ac-5522bc2218b8") : secret "metrics-server-cert" not found Jan 27 16:07:57 crc kubenswrapper[4767]: E0127 16:07:57.941619 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-webhook-certs podName:8eade9eb-ffdd-43c3-b9ac-5522bc2218b8 nodeName:}" failed. No retries permitted until 2026-01-27 16:08:05.941592876 +0000 UTC m=+1108.330610429 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-webhook-certs") pod "openstack-operator-controller-manager-79b75b7c86-j2gvn" (UID: "8eade9eb-ffdd-43c3-b9ac-5522bc2218b8") : secret "webhook-server-cert" not found Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.097334 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gg6cd" event={"ID":"666b02fc-6a23-437c-b606-66ba995cd3d6","Type":"ContainerStarted","Data":"95d6d015989b1a0323aff5289da8bf3985a465c2a30eaa1c82864d9623776880"} Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.097887 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gg6cd" Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.099224 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-j7sdn" event={"ID":"f7123cb1-dbea-42fd-abba-970911e37f5f","Type":"ContainerStarted","Data":"a16670e1216ee1701424974f05791d1a0a132610d2b61381b12482e022b256ac"} Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.099366 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-j7sdn" Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.100438 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-lwfvm" event={"ID":"a8181a54-8433-4343-84b5-f32f6f80f0d6","Type":"ContainerStarted","Data":"cef2b62ba4067096cd6a7502076d8acefc37803cdbb109554fd0164b0b10ef70"} Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.100575 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-lwfvm" Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.101679 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-nn275" event={"ID":"d1f0c156-6150-435c-afc4-224f4f72a0e2","Type":"ContainerStarted","Data":"472bf6908410803c1b0c907e73f90698f761549410b2b20e75c770fc505b2dc1"} Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.102140 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-nn275" Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.103374 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-2p8b9" event={"ID":"4b4d49ca-1e76-4d5a-8205-cdb44f6afa01","Type":"ContainerStarted","Data":"18834b3fbd8815ea846b577a89d6fc4f5b8b9925ca42328cdea7314ebf2fc71d"} Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.103773 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-2p8b9" Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.105462 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-vrwcs" event={"ID":"6a1435fd-9fab-4f48-a588-d8ae2aa1e120","Type":"ContainerStarted","Data":"f40064439457d3e259364cf66c4f76c0624e77d6adf986f471f32aebcb876a59"} Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.105535 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-vrwcs" Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.107186 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-tf9jw" event={"ID":"203038ea-e296-4f7e-8228-015aee5ec061","Type":"ContainerStarted","Data":"c2d47af16239d56238c9755072f79a3eeab883b779fbf59d28354a502cc3b135"} Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.107258 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-tf9jw" Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.108502 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7vc2f" event={"ID":"aa44bde6-467e-42ef-b797-851ee0f87a12","Type":"ContainerStarted","Data":"83b956dfcf0af739f86966c6b593ad538e8d39ac590d9bc16729dc351cb5ac4b"} Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.108642 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7vc2f" Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.109761 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-hpg68" event={"ID":"5a8ec2b4-9702-46de-bcf4-07bc2fe036e1","Type":"ContainerStarted","Data":"5be6da622963aadbe7e6496cfd92ffb957e535276a1daaa7bf0e59ce107c1027"} Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.109825 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-hpg68" Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.111251 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-94z5v" event={"ID":"61bd36e7-2117-44d2-86e5-62a7d776434e","Type":"ContainerStarted","Data":"c1021cd046b28ee5ea45dd470e7883151895948b91702c04999e6473661f3751"} Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.111322 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-94z5v" Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.113080 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-l8nss" event={"ID":"fe7ca101-b6f4-4733-a896-a9d203cc4bc0","Type":"ContainerStarted","Data":"2e46238b38fa022b9664086a93c0d24a1cec482d647b702baf1e2697fc2c93ec"} Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.113160 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-l8nss" Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.114618 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-zhm6x" event={"ID":"916825ff-c27d-4760-92bc-4adb7dc12ca2","Type":"ContainerStarted","Data":"baed3798b5d39588e099febfb6967c61ed5bcd4103cf843b995b8c5ef807fb11"} Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.114984 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-zhm6x" Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.116329 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-4vw6t" event={"ID":"72144a58-97b0-4150-8198-e9d8f8b0fa7e","Type":"ContainerStarted","Data":"23f4a7993a4d812217549237cfef603021160e12c16d5b5b46df9feed568b9bf"} Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.116650 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-4vw6t" Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.117875 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-g6bgj" event={"ID":"4c6965aa-5607-4647-a78f-eb708720424e","Type":"ContainerStarted","Data":"b7384f49a5d99167dd330e53a45aa965b69c1c0eace1ea621104b9d31c544751"} Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.118269 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-g6bgj" Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.169788 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gg6cd" podStartSLOduration=3.847976199 podStartE2EDuration="14.169761848s" podCreationTimestamp="2026-01-27 16:07:49 +0000 UTC" firstStartedPulling="2026-01-27 16:07:51.456152233 +0000 UTC m=+1093.845169756" lastFinishedPulling="2026-01-27 16:08:01.777937852 +0000 UTC m=+1104.166955405" observedRunningTime="2026-01-27 16:08:03.131302167 +0000 UTC m=+1105.520319690" watchObservedRunningTime="2026-01-27 16:08:03.169761848 +0000 UTC m=+1105.558779381" Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.210681 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-vrwcs" podStartSLOduration=3.311828437 podStartE2EDuration="14.210661749s" podCreationTimestamp="2026-01-27 16:07:49 +0000 UTC" firstStartedPulling="2026-01-27 16:07:50.881000724 +0000 UTC m=+1093.270018247" lastFinishedPulling="2026-01-27 16:08:01.779834036 +0000 UTC m=+1104.168851559" observedRunningTime="2026-01-27 16:08:03.209909087 +0000 UTC m=+1105.598926610" watchObservedRunningTime="2026-01-27 16:08:03.210661749 +0000 UTC m=+1105.599679272" Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.216466 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-j7sdn" podStartSLOduration=3.329500531 podStartE2EDuration="14.216451175s" podCreationTimestamp="2026-01-27 16:07:49 +0000 UTC" firstStartedPulling="2026-01-27 16:07:50.928348187 +0000 UTC m=+1093.317365710" lastFinishedPulling="2026-01-27 16:08:01.815298831 +0000 UTC m=+1104.204316354" observedRunningTime="2026-01-27 16:08:03.179940599 +0000 UTC m=+1105.568958132" watchObservedRunningTime="2026-01-27 16:08:03.216451175 +0000 UTC m=+1105.605468708" Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.241083 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-94z5v" podStartSLOduration=3.829067717 podStartE2EDuration="14.241063719s" podCreationTimestamp="2026-01-27 16:07:49 +0000 UTC" firstStartedPulling="2026-01-27 16:07:51.453814456 +0000 UTC m=+1093.842831979" lastFinishedPulling="2026-01-27 16:08:01.865810458 +0000 UTC m=+1104.254827981" observedRunningTime="2026-01-27 16:08:03.239869865 +0000 UTC 
m=+1105.628887378" watchObservedRunningTime="2026-01-27 16:08:03.241063719 +0000 UTC m=+1105.630081242" Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.291660 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-2p8b9" podStartSLOduration=3.805255708 podStartE2EDuration="15.291643197s" podCreationTimestamp="2026-01-27 16:07:48 +0000 UTC" firstStartedPulling="2026-01-27 16:07:50.291350757 +0000 UTC m=+1092.680368280" lastFinishedPulling="2026-01-27 16:08:01.777738236 +0000 UTC m=+1104.166755769" observedRunningTime="2026-01-27 16:08:03.290428023 +0000 UTC m=+1105.679445556" watchObservedRunningTime="2026-01-27 16:08:03.291643197 +0000 UTC m=+1105.680660720" Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.324079 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7vc2f" podStartSLOduration=3.481728869 podStartE2EDuration="14.324063136s" podCreationTimestamp="2026-01-27 16:07:49 +0000 UTC" firstStartedPulling="2026-01-27 16:07:50.935244425 +0000 UTC m=+1093.324261948" lastFinishedPulling="2026-01-27 16:08:01.777578682 +0000 UTC m=+1104.166596215" observedRunningTime="2026-01-27 16:08:03.319033812 +0000 UTC m=+1105.708051335" watchObservedRunningTime="2026-01-27 16:08:03.324063136 +0000 UTC m=+1105.713080659" Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.340598 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-tf9jw" podStartSLOduration=3.991680539 podStartE2EDuration="14.340581948s" podCreationTimestamp="2026-01-27 16:07:49 +0000 UTC" firstStartedPulling="2026-01-27 16:07:51.435849918 +0000 UTC m=+1093.824867441" lastFinishedPulling="2026-01-27 16:08:01.784751327 +0000 UTC m=+1104.173768850" observedRunningTime="2026-01-27 16:08:03.339023264 +0000 UTC m=+1105.728040787" watchObservedRunningTime="2026-01-27 16:08:03.340581948 +0000 UTC m=+1105.729599471" Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.364012 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-hpg68" podStartSLOduration=4.032470859 podStartE2EDuration="14.363998939s" podCreationTimestamp="2026-01-27 16:07:49 +0000 UTC" firstStartedPulling="2026-01-27 16:07:51.442370166 +0000 UTC m=+1093.831387699" lastFinishedPulling="2026-01-27 16:08:01.773898256 +0000 UTC m=+1104.162915779" observedRunningTime="2026-01-27 16:08:03.358040588 +0000 UTC m=+1105.747058111" watchObservedRunningTime="2026-01-27 16:08:03.363998939 +0000 UTC m=+1105.753016462" Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.400117 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-zhm6x" podStartSLOduration=4.509359131 podStartE2EDuration="15.400103671s" podCreationTimestamp="2026-01-27 16:07:48 +0000 UTC" firstStartedPulling="2026-01-27 16:07:50.887605204 +0000 UTC m=+1093.276622727" lastFinishedPulling="2026-01-27 16:08:01.778349744 +0000 UTC m=+1104.167367267" observedRunningTime="2026-01-27 16:08:03.397416305 +0000 UTC m=+1105.786433828" watchObservedRunningTime="2026-01-27 16:08:03.400103671 +0000 UTC m=+1105.789121194" Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.448925 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-g6bgj" podStartSLOduration=3.604948655 podStartE2EDuration="14.448864437s" podCreationTimestamp="2026-01-27 16:07:49 +0000 UTC" firstStartedPulling="2026-01-27 16:07:50.93366417 +0000 UTC m=+1093.322681693" lastFinishedPulling="2026-01-27 16:08:01.777579952 +0000 UTC m=+1104.166597475" observedRunningTime="2026-01-27 16:08:03.420493025 +0000 UTC m=+1105.809510538" watchObservedRunningTime="2026-01-27 16:08:03.448864437 +0000 UTC m=+1105.837881960" Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.483285 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-lwfvm" podStartSLOduration=3.589509883 podStartE2EDuration="14.483253922s" podCreationTimestamp="2026-01-27 16:07:49 +0000 UTC" firstStartedPulling="2026-01-27 16:07:50.931911619 +0000 UTC m=+1093.320929142" lastFinishedPulling="2026-01-27 16:08:01.825655658 +0000 UTC m=+1104.214673181" observedRunningTime="2026-01-27 16:08:03.45070941 +0000 UTC m=+1105.839726933" watchObservedRunningTime="2026-01-27 16:08:03.483253922 +0000 UTC m=+1105.872271445" Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.497396 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-l8nss" podStartSLOduration=3.6392640849999998 podStartE2EDuration="14.497378306s" podCreationTimestamp="2026-01-27 16:07:49 +0000 UTC" firstStartedPulling="2026-01-27 16:07:50.916790374 +0000 UTC m=+1093.305807897" lastFinishedPulling="2026-01-27 16:08:01.774904595 +0000 UTC m=+1104.163922118" observedRunningTime="2026-01-27 16:08:03.475301954 +0000 UTC m=+1105.864319477" watchObservedRunningTime="2026-01-27 16:08:03.497378306 +0000 UTC m=+1105.886395829" Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.507625 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-nn275" podStartSLOduration=4.370823625 podStartE2EDuration="15.507602789s" podCreationTimestamp="2026-01-27 16:07:48 +0000 UTC" firstStartedPulling="2026-01-27 16:07:50.641027704 +0000 UTC m=+1093.030045227" lastFinishedPulling="2026-01-27 16:08:01.777806868 +0000 UTC m=+1104.166824391" observedRunningTime="2026-01-27 16:08:03.494601827 +0000 UTC m=+1105.883619340" watchObservedRunningTime="2026-01-27 16:08:03.507602789 +0000 UTC m=+1105.896620312" Jan 27 16:08:03 crc kubenswrapper[4767]: I0127 16:08:03.526446 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-4vw6t" podStartSLOduration=4.201739005 podStartE2EDuration="14.526422208s" podCreationTimestamp="2026-01-27 16:07:49 +0000 UTC" firstStartedPulling="2026-01-27 16:07:51.453594069 +0000 UTC m=+1093.842611592" lastFinishedPulling="2026-01-27 16:08:01.778277272 +0000 UTC m=+1104.167294795" observedRunningTime="2026-01-27 16:08:03.513713624 +0000 UTC m=+1105.902731147" watchObservedRunningTime="2026-01-27 16:08:03.526422208 +0000 UTC m=+1105.915439731" Jan 27 16:08:05 crc kubenswrapper[4767]: I0127 16:08:05.064957 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e093bca8-5087-47cd-a9af-719248b96d6d-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-8ksrz\" (UID: \"e093bca8-5087-47cd-a9af-719248b96d6d\") " 
pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8ksrz" Jan 27 16:08:05 crc kubenswrapper[4767]: E0127 16:08:05.065243 4767 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 16:08:05 crc kubenswrapper[4767]: E0127 16:08:05.065480 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e093bca8-5087-47cd-a9af-719248b96d6d-cert podName:e093bca8-5087-47cd-a9af-719248b96d6d nodeName:}" failed. No retries permitted until 2026-01-27 16:08:21.065435158 +0000 UTC m=+1123.454452861 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e093bca8-5087-47cd-a9af-719248b96d6d-cert") pod "infra-operator-controller-manager-7d75bc88d5-8ksrz" (UID: "e093bca8-5087-47cd-a9af-719248b96d6d") : secret "infra-operator-webhook-server-cert" not found Jan 27 16:08:05 crc kubenswrapper[4767]: I0127 16:08:05.571834 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/355ddf1c-4f8e-45ee-8f68-af3d0b4feb51-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85489vr8\" (UID: \"355ddf1c-4f8e-45ee-8f68-af3d0b4feb51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85489vr8" Jan 27 16:08:05 crc kubenswrapper[4767]: E0127 16:08:05.572006 4767 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 16:08:05 crc kubenswrapper[4767]: E0127 16:08:05.572139 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/355ddf1c-4f8e-45ee-8f68-af3d0b4feb51-cert podName:355ddf1c-4f8e-45ee-8f68-af3d0b4feb51 nodeName:}" failed. No retries permitted until 2026-01-27 16:08:21.572112913 +0000 UTC m=+1123.961130526 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/355ddf1c-4f8e-45ee-8f68-af3d0b4feb51-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85489vr8" (UID: "355ddf1c-4f8e-45ee-8f68-af3d0b4feb51") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 16:08:05 crc kubenswrapper[4767]: I0127 16:08:05.976601 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-metrics-certs\") pod \"openstack-operator-controller-manager-79b75b7c86-j2gvn\" (UID: \"8eade9eb-ffdd-43c3-b9ac-5522bc2218b8\") " pod="openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn" Jan 27 16:08:05 crc kubenswrapper[4767]: I0127 16:08:05.976931 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-webhook-certs\") pod \"openstack-operator-controller-manager-79b75b7c86-j2gvn\" (UID: \"8eade9eb-ffdd-43c3-b9ac-5522bc2218b8\") " pod="openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn" Jan 27 16:08:05 crc kubenswrapper[4767]: E0127 16:08:05.977041 4767 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 16:08:05 crc kubenswrapper[4767]: E0127 16:08:05.977084 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-webhook-certs podName:8eade9eb-ffdd-43c3-b9ac-5522bc2218b8 nodeName:}" failed. No retries permitted until 2026-01-27 16:08:21.977071287 +0000 UTC m=+1124.366088810 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-webhook-certs") pod "openstack-operator-controller-manager-79b75b7c86-j2gvn" (UID: "8eade9eb-ffdd-43c3-b9ac-5522bc2218b8") : secret "webhook-server-cert" not found Jan 27 16:08:05 crc kubenswrapper[4767]: E0127 16:08:05.977122 4767 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 16:08:05 crc kubenswrapper[4767]: E0127 16:08:05.977139 4767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-metrics-certs podName:8eade9eb-ffdd-43c3-b9ac-5522bc2218b8 nodeName:}" failed. No retries permitted until 2026-01-27 16:08:21.977133419 +0000 UTC m=+1124.366150942 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-metrics-certs") pod "openstack-operator-controller-manager-79b75b7c86-j2gvn" (UID: "8eade9eb-ffdd-43c3-b9ac-5522bc2218b8") : secret "metrics-server-cert" not found Jan 27 16:08:06 crc kubenswrapper[4767]: I0127 16:08:06.145340 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-dzhk4" event={"ID":"a5437c7a-1810-4e6f-9db6-22cc39f0c744","Type":"ContainerStarted","Data":"b34be94226fd5c5bf8293d868eddc37ee94308632580661417f08fa04d018414"} Jan 27 16:08:06 crc kubenswrapper[4767]: I0127 16:08:06.146262 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-dzhk4" Jan 27 16:08:06 crc kubenswrapper[4767]: I0127 16:08:06.147576 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-66576874d7-z5wns" event={"ID":"9d11f824-e923-46e5-958a-f42f9c5504ef","Type":"ContainerStarted","Data":"189fbb2dbf3ff0910b3128cb20232306e5dc3b2c25c703d8a8275b1bf1bdf0ba"} Jan 27 16:08:06 crc kubenswrapper[4767]: I0127 16:08:06.147984 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-66576874d7-z5wns" Jan 27 16:08:06 crc kubenswrapper[4767]: I0127 16:08:06.190747 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-dzhk4" podStartSLOduration=2.9974773519999998 podStartE2EDuration="17.190728754s" podCreationTimestamp="2026-01-27 16:07:49 +0000 UTC" firstStartedPulling="2026-01-27 16:07:51.466274174 +0000 UTC m=+1093.855291687" lastFinishedPulling="2026-01-27 16:08:05.659525566 +0000 UTC m=+1108.048543089" observedRunningTime="2026-01-27 16:08:06.164782461 +0000 UTC m=+1108.553799984" watchObservedRunningTime="2026-01-27 16:08:06.190728754 +0000 UTC m=+1108.579746277" Jan 27 16:08:09 crc kubenswrapper[4767]: I0127 16:08:09.178469 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zm4bn" event={"ID":"057f1cf6-9e40-400e-aaa7-9acd79d01c3d","Type":"ContainerStarted","Data":"b86fb0dd4bd222e9a9d7cead755ed91249e7ecb485819975e5d9b44699142619"} Jan 27 16:08:09 crc kubenswrapper[4767]: I0127 16:08:09.180583 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-xf9r5" event={"ID":"3ae8a5b5-c9f9-4130-af1e-721617d5c204","Type":"ContainerStarted","Data":"633500d777d76b8896bc1fdc969be777dec5b65b2be1ab517643fd54dde54969"} Jan 27 16:08:09 crc kubenswrapper[4767]: I0127 16:08:09.180756 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-xf9r5" Jan 27 16:08:09 crc kubenswrapper[4767]: I0127 16:08:09.209638 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zm4bn" podStartSLOduration=2.978024495 podStartE2EDuration="20.209587779s" podCreationTimestamp="2026-01-27 16:07:49 +0000 UTC" firstStartedPulling="2026-01-27 16:07:51.465150112 +0000 UTC m=+1093.854167635" lastFinishedPulling="2026-01-27 16:08:08.696713396 +0000 UTC m=+1111.085730919" observedRunningTime="2026-01-27 16:08:09.195762314 +0000 UTC 
m=+1111.584779837" watchObservedRunningTime="2026-01-27 16:08:09.209587779 +0000 UTC m=+1111.598605322" Jan 27 16:08:09 crc kubenswrapper[4767]: I0127 16:08:09.211784 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-66576874d7-z5wns" podStartSLOduration=5.978624388 podStartE2EDuration="20.211771302s" podCreationTimestamp="2026-01-27 16:07:49 +0000 UTC" firstStartedPulling="2026-01-27 16:07:51.464976197 +0000 UTC m=+1093.853993720" lastFinishedPulling="2026-01-27 16:08:05.698123121 +0000 UTC m=+1108.087140634" observedRunningTime="2026-01-27 16:08:06.18852415 +0000 UTC m=+1108.577541673" watchObservedRunningTime="2026-01-27 16:08:09.211771302 +0000 UTC m=+1111.600788825" Jan 27 16:08:09 crc kubenswrapper[4767]: I0127 16:08:09.224278 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-xf9r5" podStartSLOduration=2.970773687 podStartE2EDuration="20.224257229s" podCreationTimestamp="2026-01-27 16:07:49 +0000 UTC" firstStartedPulling="2026-01-27 16:07:51.465322767 +0000 UTC m=+1093.854340290" lastFinishedPulling="2026-01-27 16:08:08.718806319 +0000 UTC m=+1111.107823832" observedRunningTime="2026-01-27 16:08:09.214979394 +0000 UTC m=+1111.603996937" watchObservedRunningTime="2026-01-27 16:08:09.224257229 +0000 UTC m=+1111.613274752" Jan 27 16:08:09 crc kubenswrapper[4767]: I0127 16:08:09.301661 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-2p8b9" Jan 27 16:08:09 crc kubenswrapper[4767]: I0127 16:08:09.312748 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-nn275" Jan 27 16:08:09 crc kubenswrapper[4767]: I0127 16:08:09.370513 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-zhm6x" Jan 27 16:08:09 crc kubenswrapper[4767]: I0127 16:08:09.395926 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-vrwcs" Jan 27 16:08:09 crc kubenswrapper[4767]: I0127 16:08:09.438260 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-lwfvm" Jan 27 16:08:09 crc kubenswrapper[4767]: I0127 16:08:09.465775 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-7vc2f" Jan 27 16:08:09 crc kubenswrapper[4767]: I0127 16:08:09.565738 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-l8nss" Jan 27 16:08:09 crc kubenswrapper[4767]: I0127 16:08:09.597171 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-j7sdn" Jan 27 16:08:09 crc kubenswrapper[4767]: I0127 16:08:09.724506 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-g6bgj" Jan 27 16:08:09 crc kubenswrapper[4767]: I0127 16:08:09.858343 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-94z5v" Jan 27 16:08:10 crc kubenswrapper[4767]: I0127 16:08:10.132651 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-hpg68" Jan 27 16:08:10 crc kubenswrapper[4767]: I0127 16:08:10.183896 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-tf9jw" Jan 27 16:08:10 crc kubenswrapper[4767]: I0127 16:08:10.199527 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xfptk" event={"ID":"9df4e8b9-adcf-4442-a0db-70f45bf9977d","Type":"ContainerStarted","Data":"d018f54d3c657b2469beeab106e018223e826273aa373e7525b37fcae3943399"} Jan 27 16:08:10 crc kubenswrapper[4767]: I0127 16:08:10.199736 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xfptk" Jan 27 16:08:10 crc kubenswrapper[4767]: I0127 16:08:10.216645 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-4vw6t" Jan 27 16:08:10 crc kubenswrapper[4767]: I0127 16:08:10.223609 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xfptk" podStartSLOduration=3.073039885 podStartE2EDuration="21.223589809s" podCreationTimestamp="2026-01-27 16:07:49 +0000 UTC" firstStartedPulling="2026-01-27 16:07:51.465840032 +0000 UTC m=+1093.854857545" lastFinishedPulling="2026-01-27 16:08:09.616389946 +0000 UTC m=+1112.005407469" observedRunningTime="2026-01-27 16:08:10.217492124 +0000 UTC m=+1112.606509647" watchObservedRunningTime="2026-01-27 16:08:10.223589809 +0000 UTC m=+1112.612607332" Jan 27 16:08:10 crc kubenswrapper[4767]: I0127 16:08:10.240312 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-gg6cd" Jan 27 16:08:10 crc kubenswrapper[4767]: I0127 16:08:10.309245 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-66576874d7-z5wns" Jan 27 16:08:12 crc kubenswrapper[4767]: I0127 16:08:12.212347 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qvnm9" event={"ID":"6822744e-5d47-466b-9846-88e9c68a3aeb","Type":"ContainerStarted","Data":"c6e745fda666d3abc58df8133bb67fb4687f4eeef5278d725bcdae14d15eafac"} Jan 27 16:08:12 crc kubenswrapper[4767]: I0127 16:08:12.212912 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qvnm9" Jan 27 16:08:12 crc kubenswrapper[4767]: I0127 16:08:12.227852 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qvnm9" podStartSLOduration=3.600431815 podStartE2EDuration="23.227836487s" podCreationTimestamp="2026-01-27 16:07:49 +0000 UTC" firstStartedPulling="2026-01-27 16:07:51.471811094 +0000 UTC m=+1093.860828617" lastFinishedPulling="2026-01-27 16:08:11.099215756 +0000 UTC m=+1113.488233289" observedRunningTime="2026-01-27 16:08:12.224188032 +0000 UTC m=+1114.613205575" watchObservedRunningTime="2026-01-27 
16:08:12.227836487 +0000 UTC m=+1114.616854010" Jan 27 16:08:19 crc kubenswrapper[4767]: I0127 16:08:19.687159 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-xf9r5" Jan 27 16:08:19 crc kubenswrapper[4767]: I0127 16:08:19.841157 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-xfptk" Jan 27 16:08:19 crc kubenswrapper[4767]: I0127 16:08:19.931465 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-dzhk4" Jan 27 16:08:20 crc kubenswrapper[4767]: I0127 16:08:20.159736 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qvnm9" Jan 27 16:08:21 crc kubenswrapper[4767]: I0127 16:08:21.141767 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e093bca8-5087-47cd-a9af-719248b96d6d-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-8ksrz\" (UID: \"e093bca8-5087-47cd-a9af-719248b96d6d\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8ksrz" Jan 27 16:08:21 crc kubenswrapper[4767]: I0127 16:08:21.147772 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e093bca8-5087-47cd-a9af-719248b96d6d-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-8ksrz\" (UID: \"e093bca8-5087-47cd-a9af-719248b96d6d\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8ksrz" Jan 27 16:08:21 crc kubenswrapper[4767]: I0127 16:08:21.295514 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8ksrz"
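After nearly half a minute of doubling retries, MountVolume.SetUp now succeeds: the infra-operator-webhook-server-cert secret finally exists (the log does not record which controller created it; webhook serving certs are typically issued asynchronously by another component). The condition the retry loop was waiting on can be checked directly; a hypothetical client-go helper, with waitForSecret being an illustrative name:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForSecret polls until the named Secret exists, the condition the
// kubelet's 4s/8s/16s mount retries above were effectively waiting on.
func waitForSecret(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := cs.CoreV1().Secrets(ns).Get(context.TODO(), name, metav1.GetOptions{}); err == nil {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("secret %s/%s not found within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForSecret(cs, "openstack-operators", "infra-operator-webhook-server-cert", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("secret present; the cert volume can mount")
}
```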
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8ksrz" Jan 27 16:08:21 crc kubenswrapper[4767]: I0127 16:08:21.650968 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/355ddf1c-4f8e-45ee-8f68-af3d0b4feb51-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85489vr8\" (UID: \"355ddf1c-4f8e-45ee-8f68-af3d0b4feb51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85489vr8" Jan 27 16:08:21 crc kubenswrapper[4767]: I0127 16:08:21.655954 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/355ddf1c-4f8e-45ee-8f68-af3d0b4feb51-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85489vr8\" (UID: \"355ddf1c-4f8e-45ee-8f68-af3d0b4feb51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85489vr8" Jan 27 16:08:21 crc kubenswrapper[4767]: I0127 16:08:21.757882 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-8ksrz"] Jan 27 16:08:21 crc kubenswrapper[4767]: W0127 16:08:21.761631 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode093bca8_5087_47cd_a9af_719248b96d6d.slice/crio-c3cb819bef7cb9e15e0550842493d27fc2cf32dd9a21740cad491f5f8561120e WatchSource:0}: Error finding container c3cb819bef7cb9e15e0550842493d27fc2cf32dd9a21740cad491f5f8561120e: Status 404 returned error can't find the container with id c3cb819bef7cb9e15e0550842493d27fc2cf32dd9a21740cad491f5f8561120e Jan 27 16:08:21 crc kubenswrapper[4767]: I0127 16:08:21.795107 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85489vr8" Jan 27 16:08:22 crc kubenswrapper[4767]: I0127 16:08:22.056292 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-webhook-certs\") pod \"openstack-operator-controller-manager-79b75b7c86-j2gvn\" (UID: \"8eade9eb-ffdd-43c3-b9ac-5522bc2218b8\") " pod="openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn" Jan 27 16:08:22 crc kubenswrapper[4767]: I0127 16:08:22.056661 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-metrics-certs\") pod \"openstack-operator-controller-manager-79b75b7c86-j2gvn\" (UID: \"8eade9eb-ffdd-43c3-b9ac-5522bc2218b8\") " pod="openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn" Jan 27 16:08:22 crc kubenswrapper[4767]: I0127 16:08:22.060284 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-metrics-certs\") pod \"openstack-operator-controller-manager-79b75b7c86-j2gvn\" (UID: \"8eade9eb-ffdd-43c3-b9ac-5522bc2218b8\") " pod="openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn" Jan 27 16:08:22 crc kubenswrapper[4767]: I0127 16:08:22.060773 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8eade9eb-ffdd-43c3-b9ac-5522bc2218b8-webhook-certs\") pod \"openstack-operator-controller-manager-79b75b7c86-j2gvn\" (UID: \"8eade9eb-ffdd-43c3-b9ac-5522bc2218b8\") " pod="openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn" Jan 27 16:08:22 crc kubenswrapper[4767]: I0127 16:08:22.224552 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn" Jan 27 16:08:22 crc kubenswrapper[4767]: I0127 16:08:22.318481 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85489vr8"] Jan 27 16:08:22 crc kubenswrapper[4767]: I0127 16:08:22.320099 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8ksrz" event={"ID":"e093bca8-5087-47cd-a9af-719248b96d6d","Type":"ContainerStarted","Data":"c3cb819bef7cb9e15e0550842493d27fc2cf32dd9a21740cad491f5f8561120e"} Jan 27 16:08:22 crc kubenswrapper[4767]: W0127 16:08:22.739183 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8eade9eb_ffdd_43c3_b9ac_5522bc2218b8.slice/crio-8620da1db2d970ae4f9aa380991709c281e1f709a8756de0e20afc25e4848dff WatchSource:0}: Error finding container 8620da1db2d970ae4f9aa380991709c281e1f709a8756de0e20afc25e4848dff: Status 404 returned error can't find the container with id 8620da1db2d970ae4f9aa380991709c281e1f709a8756de0e20afc25e4848dff Jan 27 16:08:22 crc kubenswrapper[4767]: I0127 16:08:22.744115 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn"] Jan 27 16:08:23 crc kubenswrapper[4767]: I0127 16:08:23.328033 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn" event={"ID":"8eade9eb-ffdd-43c3-b9ac-5522bc2218b8","Type":"ContainerStarted","Data":"8620da1db2d970ae4f9aa380991709c281e1f709a8756de0e20afc25e4848dff"} Jan 27 16:08:23 crc kubenswrapper[4767]: I0127 16:08:23.329005 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85489vr8" event={"ID":"355ddf1c-4f8e-45ee-8f68-af3d0b4feb51","Type":"ContainerStarted","Data":"73ae4ed1a9af853a22297646b6cbb1b87d16c5afd83e32081c06051188a5fb27"} Jan 27 16:08:24 crc kubenswrapper[4767]: I0127 16:08:24.350502 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn" event={"ID":"8eade9eb-ffdd-43c3-b9ac-5522bc2218b8","Type":"ContainerStarted","Data":"e13ba0db194b34b317da336c9e79b3bc600c4f4c806021d43e9c6055a4fd8b5c"} Jan 27 16:08:26 crc kubenswrapper[4767]: I0127 16:08:26.365272 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn" Jan 27 16:08:26 crc kubenswrapper[4767]: I0127 16:08:26.417548 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn" podStartSLOduration=37.417519808 podStartE2EDuration="37.417519808s" podCreationTimestamp="2026-01-27 16:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:08:26.413642647 +0000 UTC m=+1128.802660210" watchObservedRunningTime="2026-01-27 16:08:26.417519808 +0000 UTC m=+1128.806537361" Jan 27 16:08:31 crc kubenswrapper[4767]: I0127 16:08:31.404744 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8ksrz" 
event={"ID":"e093bca8-5087-47cd-a9af-719248b96d6d","Type":"ContainerStarted","Data":"be2bed1535ef26ce695b874fa5297025323c301f106a2f92392c0c5d413b0096"} Jan 27 16:08:31 crc kubenswrapper[4767]: I0127 16:08:31.405065 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8ksrz" Jan 27 16:08:31 crc kubenswrapper[4767]: I0127 16:08:31.406990 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85489vr8" event={"ID":"355ddf1c-4f8e-45ee-8f68-af3d0b4feb51","Type":"ContainerStarted","Data":"ea30416c766f3123d76c39b2c7174724123af40700c9af5b4033ec96d6211bd8"} Jan 27 16:08:31 crc kubenswrapper[4767]: I0127 16:08:31.407368 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85489vr8" Jan 27 16:08:31 crc kubenswrapper[4767]: I0127 16:08:31.422535 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8ksrz" podStartSLOduration=33.4725934 podStartE2EDuration="42.422521564s" podCreationTimestamp="2026-01-27 16:07:49 +0000 UTC" firstStartedPulling="2026-01-27 16:08:21.764008524 +0000 UTC m=+1124.153026047" lastFinishedPulling="2026-01-27 16:08:30.713936688 +0000 UTC m=+1133.102954211" observedRunningTime="2026-01-27 16:08:31.418857399 +0000 UTC m=+1133.807874922" watchObservedRunningTime="2026-01-27 16:08:31.422521564 +0000 UTC m=+1133.811539087" Jan 27 16:08:31 crc kubenswrapper[4767]: I0127 16:08:31.451186 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85489vr8" podStartSLOduration=34.063539309 podStartE2EDuration="42.451167464s" podCreationTimestamp="2026-01-27 16:07:49 +0000 UTC" firstStartedPulling="2026-01-27 16:08:22.334040194 +0000 UTC m=+1124.723057717" lastFinishedPulling="2026-01-27 16:08:30.721668349 +0000 UTC m=+1133.110685872" observedRunningTime="2026-01-27 16:08:31.445595864 +0000 UTC m=+1133.834613387" watchObservedRunningTime="2026-01-27 16:08:31.451167464 +0000 UTC m=+1133.840185007" Jan 27 16:08:32 crc kubenswrapper[4767]: I0127 16:08:32.230223 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-79b75b7c86-j2gvn" Jan 27 16:08:41 crc kubenswrapper[4767]: I0127 16:08:41.304580 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-8ksrz" Jan 27 16:08:41 crc kubenswrapper[4767]: I0127 16:08:41.801483 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85489vr8" Jan 27 16:09:54 crc kubenswrapper[4767]: I0127 16:09:54.857772 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:09:54 crc kubenswrapper[4767]: I0127 16:09:54.858397 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:10:24 crc kubenswrapper[4767]: I0127 16:10:24.857816 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:10:24 crc kubenswrapper[4767]: I0127 16:10:24.858346 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:10:54 crc kubenswrapper[4767]: I0127 16:10:54.858105 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:10:54 crc kubenswrapper[4767]: I0127 16:10:54.858761 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:10:54 crc kubenswrapper[4767]: I0127 16:10:54.858826 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" Jan 27 16:10:54 crc kubenswrapper[4767]: I0127 16:10:54.860030 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5eae9696c039bc84d18bc2b4b0801483abe347db02d34f8f9f3cf2ec17b09fcc"} pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 16:10:54 crc kubenswrapper[4767]: I0127 16:10:54.860118 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" containerID="cri-o://5eae9696c039bc84d18bc2b4b0801483abe347db02d34f8f9f3cf2ec17b09fcc" gracePeriod=600 Jan 27 16:10:55 crc kubenswrapper[4767]: I0127 16:10:55.601149 4767 generic.go:334] "Generic (PLEG): container finished" podID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerID="5eae9696c039bc84d18bc2b4b0801483abe347db02d34f8f9f3cf2ec17b09fcc" exitCode=0 Jan 27 16:10:55 crc kubenswrapper[4767]: I0127 16:10:55.601826 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerDied","Data":"5eae9696c039bc84d18bc2b4b0801483abe347db02d34f8f9f3cf2ec17b09fcc"} Jan 27 16:10:55 crc kubenswrapper[4767]: I0127 16:10:55.601957 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" 
event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerStarted","Data":"87a4d484001a9613919682c6b99bf0f8377ea49c3768825aaeaa01ed98151eda"} Jan 27 16:10:55 crc kubenswrapper[4767]: I0127 16:10:55.602049 4767 scope.go:117] "RemoveContainer" containerID="f3d25f07cf5921e6e421aefa0d813e2909e28e1abdde0dc623cba28c2a963a96" Jan 27 16:13:24 crc kubenswrapper[4767]: I0127 16:13:24.858315 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:13:24 crc kubenswrapper[4767]: I0127 16:13:24.860176 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:13:36 crc kubenswrapper[4767]: I0127 16:13:36.760843 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qllwn"] Jan 27 16:13:36 crc kubenswrapper[4767]: I0127 16:13:36.763275 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qllwn" Jan 27 16:13:36 crc kubenswrapper[4767]: I0127 16:13:36.776313 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qllwn"] Jan 27 16:13:36 crc kubenswrapper[4767]: I0127 16:13:36.864641 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13974cfc-0b0d-4b61-b9e3-a80ea628d3df-utilities\") pod \"redhat-operators-qllwn\" (UID: \"13974cfc-0b0d-4b61-b9e3-a80ea628d3df\") " pod="openshift-marketplace/redhat-operators-qllwn" Jan 27 16:13:36 crc kubenswrapper[4767]: I0127 16:13:36.864730 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8gbl\" (UniqueName: \"kubernetes.io/projected/13974cfc-0b0d-4b61-b9e3-a80ea628d3df-kube-api-access-r8gbl\") pod \"redhat-operators-qllwn\" (UID: \"13974cfc-0b0d-4b61-b9e3-a80ea628d3df\") " pod="openshift-marketplace/redhat-operators-qllwn" Jan 27 16:13:36 crc kubenswrapper[4767]: I0127 16:13:36.864896 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13974cfc-0b0d-4b61-b9e3-a80ea628d3df-catalog-content\") pod \"redhat-operators-qllwn\" (UID: \"13974cfc-0b0d-4b61-b9e3-a80ea628d3df\") " pod="openshift-marketplace/redhat-operators-qllwn" Jan 27 16:13:36 crc kubenswrapper[4767]: I0127 16:13:36.966180 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8gbl\" (UniqueName: \"kubernetes.io/projected/13974cfc-0b0d-4b61-b9e3-a80ea628d3df-kube-api-access-r8gbl\") pod \"redhat-operators-qllwn\" (UID: \"13974cfc-0b0d-4b61-b9e3-a80ea628d3df\") " pod="openshift-marketplace/redhat-operators-qllwn" Jan 27 16:13:36 crc kubenswrapper[4767]: I0127 16:13:36.966281 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13974cfc-0b0d-4b61-b9e3-a80ea628d3df-catalog-content\") pod \"redhat-operators-qllwn\" (UID: 
\"13974cfc-0b0d-4b61-b9e3-a80ea628d3df\") " pod="openshift-marketplace/redhat-operators-qllwn" Jan 27 16:13:36 crc kubenswrapper[4767]: I0127 16:13:36.966391 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13974cfc-0b0d-4b61-b9e3-a80ea628d3df-utilities\") pod \"redhat-operators-qllwn\" (UID: \"13974cfc-0b0d-4b61-b9e3-a80ea628d3df\") " pod="openshift-marketplace/redhat-operators-qllwn" Jan 27 16:13:36 crc kubenswrapper[4767]: I0127 16:13:36.966765 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13974cfc-0b0d-4b61-b9e3-a80ea628d3df-catalog-content\") pod \"redhat-operators-qllwn\" (UID: \"13974cfc-0b0d-4b61-b9e3-a80ea628d3df\") " pod="openshift-marketplace/redhat-operators-qllwn" Jan 27 16:13:36 crc kubenswrapper[4767]: I0127 16:13:36.966876 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13974cfc-0b0d-4b61-b9e3-a80ea628d3df-utilities\") pod \"redhat-operators-qllwn\" (UID: \"13974cfc-0b0d-4b61-b9e3-a80ea628d3df\") " pod="openshift-marketplace/redhat-operators-qllwn" Jan 27 16:13:36 crc kubenswrapper[4767]: I0127 16:13:36.986586 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8gbl\" (UniqueName: \"kubernetes.io/projected/13974cfc-0b0d-4b61-b9e3-a80ea628d3df-kube-api-access-r8gbl\") pod \"redhat-operators-qllwn\" (UID: \"13974cfc-0b0d-4b61-b9e3-a80ea628d3df\") " pod="openshift-marketplace/redhat-operators-qllwn" Jan 27 16:13:37 crc kubenswrapper[4767]: I0127 16:13:37.114555 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qllwn" Jan 27 16:13:37 crc kubenswrapper[4767]: I0127 16:13:37.554742 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qllwn"] Jan 27 16:13:38 crc kubenswrapper[4767]: I0127 16:13:38.040466 4767 generic.go:334] "Generic (PLEG): container finished" podID="13974cfc-0b0d-4b61-b9e3-a80ea628d3df" containerID="9d74935760fac33fe1b6212cbeba3877298d5368a921c84bc0ad634ad77dba01" exitCode=0 Jan 27 16:13:38 crc kubenswrapper[4767]: I0127 16:13:38.040696 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qllwn" event={"ID":"13974cfc-0b0d-4b61-b9e3-a80ea628d3df","Type":"ContainerDied","Data":"9d74935760fac33fe1b6212cbeba3877298d5368a921c84bc0ad634ad77dba01"} Jan 27 16:13:38 crc kubenswrapper[4767]: I0127 16:13:38.040726 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qllwn" event={"ID":"13974cfc-0b0d-4b61-b9e3-a80ea628d3df","Type":"ContainerStarted","Data":"b2e1fdf430cccdacea093be4d36102078c2673d32afe7c01892c5e4e422ac982"} Jan 27 16:13:38 crc kubenswrapper[4767]: I0127 16:13:38.042756 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 16:13:39 crc kubenswrapper[4767]: I0127 16:13:39.053791 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qllwn" event={"ID":"13974cfc-0b0d-4b61-b9e3-a80ea628d3df","Type":"ContainerStarted","Data":"2f32b5564f128d3693bcea368a46fe93966777f4974f2bd69b0ab2e384ae2d0d"} Jan 27 16:13:40 crc kubenswrapper[4767]: I0127 16:13:40.065425 4767 generic.go:334] "Generic (PLEG): container finished" podID="13974cfc-0b0d-4b61-b9e3-a80ea628d3df" 
containerID="2f32b5564f128d3693bcea368a46fe93966777f4974f2bd69b0ab2e384ae2d0d" exitCode=0 Jan 27 16:13:40 crc kubenswrapper[4767]: I0127 16:13:40.065515 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qllwn" event={"ID":"13974cfc-0b0d-4b61-b9e3-a80ea628d3df","Type":"ContainerDied","Data":"2f32b5564f128d3693bcea368a46fe93966777f4974f2bd69b0ab2e384ae2d0d"} Jan 27 16:13:41 crc kubenswrapper[4767]: I0127 16:13:41.074649 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qllwn" event={"ID":"13974cfc-0b0d-4b61-b9e3-a80ea628d3df","Type":"ContainerStarted","Data":"12eba7b5764a13a948d5ecc47c30e763bb4f0ade06df83298cabdf90af7d8b38"} Jan 27 16:13:41 crc kubenswrapper[4767]: I0127 16:13:41.097718 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qllwn" podStartSLOduration=2.665402216 podStartE2EDuration="5.097696542s" podCreationTimestamp="2026-01-27 16:13:36 +0000 UTC" firstStartedPulling="2026-01-27 16:13:38.042458302 +0000 UTC m=+1440.431475825" lastFinishedPulling="2026-01-27 16:13:40.474752628 +0000 UTC m=+1442.863770151" observedRunningTime="2026-01-27 16:13:41.091118965 +0000 UTC m=+1443.480136488" watchObservedRunningTime="2026-01-27 16:13:41.097696542 +0000 UTC m=+1443.486714065" Jan 27 16:13:47 crc kubenswrapper[4767]: I0127 16:13:47.115528 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qllwn" Jan 27 16:13:47 crc kubenswrapper[4767]: I0127 16:13:47.116087 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qllwn" Jan 27 16:13:47 crc kubenswrapper[4767]: I0127 16:13:47.154676 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qllwn" Jan 27 16:13:48 crc kubenswrapper[4767]: I0127 16:13:48.173524 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qllwn" Jan 27 16:13:48 crc kubenswrapper[4767]: I0127 16:13:48.235828 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qllwn"] Jan 27 16:13:50 crc kubenswrapper[4767]: I0127 16:13:50.146517 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qllwn" podUID="13974cfc-0b0d-4b61-b9e3-a80ea628d3df" containerName="registry-server" containerID="cri-o://12eba7b5764a13a948d5ecc47c30e763bb4f0ade06df83298cabdf90af7d8b38" gracePeriod=2 Jan 27 16:13:51 crc kubenswrapper[4767]: I0127 16:13:51.635177 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qllwn" Jan 27 16:13:51 crc kubenswrapper[4767]: I0127 16:13:51.665949 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8gbl\" (UniqueName: \"kubernetes.io/projected/13974cfc-0b0d-4b61-b9e3-a80ea628d3df-kube-api-access-r8gbl\") pod \"13974cfc-0b0d-4b61-b9e3-a80ea628d3df\" (UID: \"13974cfc-0b0d-4b61-b9e3-a80ea628d3df\") " Jan 27 16:13:51 crc kubenswrapper[4767]: I0127 16:13:51.666051 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13974cfc-0b0d-4b61-b9e3-a80ea628d3df-utilities\") pod \"13974cfc-0b0d-4b61-b9e3-a80ea628d3df\" (UID: \"13974cfc-0b0d-4b61-b9e3-a80ea628d3df\") " Jan 27 16:13:51 crc kubenswrapper[4767]: I0127 16:13:51.666119 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13974cfc-0b0d-4b61-b9e3-a80ea628d3df-catalog-content\") pod \"13974cfc-0b0d-4b61-b9e3-a80ea628d3df\" (UID: \"13974cfc-0b0d-4b61-b9e3-a80ea628d3df\") " Jan 27 16:13:51 crc kubenswrapper[4767]: I0127 16:13:51.666888 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13974cfc-0b0d-4b61-b9e3-a80ea628d3df-utilities" (OuterVolumeSpecName: "utilities") pod "13974cfc-0b0d-4b61-b9e3-a80ea628d3df" (UID: "13974cfc-0b0d-4b61-b9e3-a80ea628d3df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:13:51 crc kubenswrapper[4767]: I0127 16:13:51.672773 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13974cfc-0b0d-4b61-b9e3-a80ea628d3df-kube-api-access-r8gbl" (OuterVolumeSpecName: "kube-api-access-r8gbl") pod "13974cfc-0b0d-4b61-b9e3-a80ea628d3df" (UID: "13974cfc-0b0d-4b61-b9e3-a80ea628d3df"). InnerVolumeSpecName "kube-api-access-r8gbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:13:51 crc kubenswrapper[4767]: I0127 16:13:51.767992 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8gbl\" (UniqueName: \"kubernetes.io/projected/13974cfc-0b0d-4b61-b9e3-a80ea628d3df-kube-api-access-r8gbl\") on node \"crc\" DevicePath \"\"" Jan 27 16:13:51 crc kubenswrapper[4767]: I0127 16:13:51.768042 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13974cfc-0b0d-4b61-b9e3-a80ea628d3df-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:13:51 crc kubenswrapper[4767]: I0127 16:13:51.769464 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13974cfc-0b0d-4b61-b9e3-a80ea628d3df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13974cfc-0b0d-4b61-b9e3-a80ea628d3df" (UID: "13974cfc-0b0d-4b61-b9e3-a80ea628d3df"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:13:51 crc kubenswrapper[4767]: I0127 16:13:51.869219 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13974cfc-0b0d-4b61-b9e3-a80ea628d3df-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:13:52 crc kubenswrapper[4767]: I0127 16:13:52.166394 4767 generic.go:334] "Generic (PLEG): container finished" podID="13974cfc-0b0d-4b61-b9e3-a80ea628d3df" containerID="12eba7b5764a13a948d5ecc47c30e763bb4f0ade06df83298cabdf90af7d8b38" exitCode=0 Jan 27 16:13:52 crc kubenswrapper[4767]: I0127 16:13:52.166440 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qllwn" event={"ID":"13974cfc-0b0d-4b61-b9e3-a80ea628d3df","Type":"ContainerDied","Data":"12eba7b5764a13a948d5ecc47c30e763bb4f0ade06df83298cabdf90af7d8b38"} Jan 27 16:13:52 crc kubenswrapper[4767]: I0127 16:13:52.166499 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qllwn" event={"ID":"13974cfc-0b0d-4b61-b9e3-a80ea628d3df","Type":"ContainerDied","Data":"b2e1fdf430cccdacea093be4d36102078c2673d32afe7c01892c5e4e422ac982"} Jan 27 16:13:52 crc kubenswrapper[4767]: I0127 16:13:52.166517 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qllwn" Jan 27 16:13:52 crc kubenswrapper[4767]: I0127 16:13:52.166530 4767 scope.go:117] "RemoveContainer" containerID="12eba7b5764a13a948d5ecc47c30e763bb4f0ade06df83298cabdf90af7d8b38" Jan 27 16:13:52 crc kubenswrapper[4767]: I0127 16:13:52.193393 4767 scope.go:117] "RemoveContainer" containerID="2f32b5564f128d3693bcea368a46fe93966777f4974f2bd69b0ab2e384ae2d0d" Jan 27 16:13:52 crc kubenswrapper[4767]: I0127 16:13:52.206070 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qllwn"] Jan 27 16:13:52 crc kubenswrapper[4767]: I0127 16:13:52.211891 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qllwn"] Jan 27 16:13:52 crc kubenswrapper[4767]: I0127 16:13:52.226502 4767 scope.go:117] "RemoveContainer" containerID="9d74935760fac33fe1b6212cbeba3877298d5368a921c84bc0ad634ad77dba01" Jan 27 16:13:52 crc kubenswrapper[4767]: I0127 16:13:52.241742 4767 scope.go:117] "RemoveContainer" containerID="12eba7b5764a13a948d5ecc47c30e763bb4f0ade06df83298cabdf90af7d8b38" Jan 27 16:13:52 crc kubenswrapper[4767]: E0127 16:13:52.242172 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12eba7b5764a13a948d5ecc47c30e763bb4f0ade06df83298cabdf90af7d8b38\": container with ID starting with 12eba7b5764a13a948d5ecc47c30e763bb4f0ade06df83298cabdf90af7d8b38 not found: ID does not exist" containerID="12eba7b5764a13a948d5ecc47c30e763bb4f0ade06df83298cabdf90af7d8b38" Jan 27 16:13:52 crc kubenswrapper[4767]: I0127 16:13:52.242302 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12eba7b5764a13a948d5ecc47c30e763bb4f0ade06df83298cabdf90af7d8b38"} err="failed to get container status \"12eba7b5764a13a948d5ecc47c30e763bb4f0ade06df83298cabdf90af7d8b38\": rpc error: code = NotFound desc = could not find container \"12eba7b5764a13a948d5ecc47c30e763bb4f0ade06df83298cabdf90af7d8b38\": container with ID starting with 12eba7b5764a13a948d5ecc47c30e763bb4f0ade06df83298cabdf90af7d8b38 not found: ID does not exist" Jan 27 16:13:52 crc 
kubenswrapper[4767]: I0127 16:13:52.242339 4767 scope.go:117] "RemoveContainer" containerID="2f32b5564f128d3693bcea368a46fe93966777f4974f2bd69b0ab2e384ae2d0d" Jan 27 16:13:52 crc kubenswrapper[4767]: E0127 16:13:52.242706 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f32b5564f128d3693bcea368a46fe93966777f4974f2bd69b0ab2e384ae2d0d\": container with ID starting with 2f32b5564f128d3693bcea368a46fe93966777f4974f2bd69b0ab2e384ae2d0d not found: ID does not exist" containerID="2f32b5564f128d3693bcea368a46fe93966777f4974f2bd69b0ab2e384ae2d0d" Jan 27 16:13:52 crc kubenswrapper[4767]: I0127 16:13:52.242735 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f32b5564f128d3693bcea368a46fe93966777f4974f2bd69b0ab2e384ae2d0d"} err="failed to get container status \"2f32b5564f128d3693bcea368a46fe93966777f4974f2bd69b0ab2e384ae2d0d\": rpc error: code = NotFound desc = could not find container \"2f32b5564f128d3693bcea368a46fe93966777f4974f2bd69b0ab2e384ae2d0d\": container with ID starting with 2f32b5564f128d3693bcea368a46fe93966777f4974f2bd69b0ab2e384ae2d0d not found: ID does not exist" Jan 27 16:13:52 crc kubenswrapper[4767]: I0127 16:13:52.242755 4767 scope.go:117] "RemoveContainer" containerID="9d74935760fac33fe1b6212cbeba3877298d5368a921c84bc0ad634ad77dba01" Jan 27 16:13:52 crc kubenswrapper[4767]: E0127 16:13:52.243023 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d74935760fac33fe1b6212cbeba3877298d5368a921c84bc0ad634ad77dba01\": container with ID starting with 9d74935760fac33fe1b6212cbeba3877298d5368a921c84bc0ad634ad77dba01 not found: ID does not exist" containerID="9d74935760fac33fe1b6212cbeba3877298d5368a921c84bc0ad634ad77dba01" Jan 27 16:13:52 crc kubenswrapper[4767]: I0127 16:13:52.243052 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d74935760fac33fe1b6212cbeba3877298d5368a921c84bc0ad634ad77dba01"} err="failed to get container status \"9d74935760fac33fe1b6212cbeba3877298d5368a921c84bc0ad634ad77dba01\": rpc error: code = NotFound desc = could not find container \"9d74935760fac33fe1b6212cbeba3877298d5368a921c84bc0ad634ad77dba01\": container with ID starting with 9d74935760fac33fe1b6212cbeba3877298d5368a921c84bc0ad634ad77dba01 not found: ID does not exist" Jan 27 16:13:52 crc kubenswrapper[4767]: I0127 16:13:52.338039 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13974cfc-0b0d-4b61-b9e3-a80ea628d3df" path="/var/lib/kubelet/pods/13974cfc-0b0d-4b61-b9e3-a80ea628d3df/volumes" Jan 27 16:13:54 crc kubenswrapper[4767]: I0127 16:13:54.858229 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:13:54 crc kubenswrapper[4767]: I0127 16:13:54.858511 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:14:24 crc kubenswrapper[4767]: I0127 16:14:24.857836 4767 patch_prober.go:28] interesting 
pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:14:24 crc kubenswrapper[4767]: I0127 16:14:24.858704 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:14:24 crc kubenswrapper[4767]: I0127 16:14:24.858810 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" Jan 27 16:14:24 crc kubenswrapper[4767]: I0127 16:14:24.859984 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"87a4d484001a9613919682c6b99bf0f8377ea49c3768825aaeaa01ed98151eda"} pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 16:14:24 crc kubenswrapper[4767]: I0127 16:14:24.860166 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" containerID="cri-o://87a4d484001a9613919682c6b99bf0f8377ea49c3768825aaeaa01ed98151eda" gracePeriod=600 Jan 27 16:14:25 crc kubenswrapper[4767]: I0127 16:14:25.386790 4767 generic.go:334] "Generic (PLEG): container finished" podID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerID="87a4d484001a9613919682c6b99bf0f8377ea49c3768825aaeaa01ed98151eda" exitCode=0 Jan 27 16:14:25 crc kubenswrapper[4767]: I0127 16:14:25.386819 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerDied","Data":"87a4d484001a9613919682c6b99bf0f8377ea49c3768825aaeaa01ed98151eda"} Jan 27 16:14:25 crc kubenswrapper[4767]: I0127 16:14:25.387406 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerStarted","Data":"513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf"} Jan 27 16:14:25 crc kubenswrapper[4767]: I0127 16:14:25.387431 4767 scope.go:117] "RemoveContainer" containerID="5eae9696c039bc84d18bc2b4b0801483abe347db02d34f8f9f3cf2ec17b09fcc" Jan 27 16:15:00 crc kubenswrapper[4767]: I0127 16:15:00.161073 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492175-x2p7k"] Jan 27 16:15:00 crc kubenswrapper[4767]: E0127 16:15:00.163066 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13974cfc-0b0d-4b61-b9e3-a80ea628d3df" containerName="extract-utilities" Jan 27 16:15:00 crc kubenswrapper[4767]: I0127 16:15:00.163087 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="13974cfc-0b0d-4b61-b9e3-a80ea628d3df" containerName="extract-utilities" Jan 27 16:15:00 crc kubenswrapper[4767]: E0127 16:15:00.163120 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13974cfc-0b0d-4b61-b9e3-a80ea628d3df" 
containerName="extract-content" Jan 27 16:15:00 crc kubenswrapper[4767]: I0127 16:15:00.163126 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="13974cfc-0b0d-4b61-b9e3-a80ea628d3df" containerName="extract-content" Jan 27 16:15:00 crc kubenswrapper[4767]: E0127 16:15:00.163136 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13974cfc-0b0d-4b61-b9e3-a80ea628d3df" containerName="registry-server" Jan 27 16:15:00 crc kubenswrapper[4767]: I0127 16:15:00.163142 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="13974cfc-0b0d-4b61-b9e3-a80ea628d3df" containerName="registry-server" Jan 27 16:15:00 crc kubenswrapper[4767]: I0127 16:15:00.163368 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="13974cfc-0b0d-4b61-b9e3-a80ea628d3df" containerName="registry-server" Jan 27 16:15:00 crc kubenswrapper[4767]: I0127 16:15:00.163908 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-x2p7k" Jan 27 16:15:00 crc kubenswrapper[4767]: I0127 16:15:00.172946 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 16:15:00 crc kubenswrapper[4767]: I0127 16:15:00.173175 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 16:15:00 crc kubenswrapper[4767]: I0127 16:15:00.173646 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492175-x2p7k"] Jan 27 16:15:00 crc kubenswrapper[4767]: I0127 16:15:00.261425 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/696e1b3d-2da4-4734-9383-43f8c13791fe-config-volume\") pod \"collect-profiles-29492175-x2p7k\" (UID: \"696e1b3d-2da4-4734-9383-43f8c13791fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-x2p7k" Jan 27 16:15:00 crc kubenswrapper[4767]: I0127 16:15:00.261804 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/696e1b3d-2da4-4734-9383-43f8c13791fe-secret-volume\") pod \"collect-profiles-29492175-x2p7k\" (UID: \"696e1b3d-2da4-4734-9383-43f8c13791fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-x2p7k" Jan 27 16:15:00 crc kubenswrapper[4767]: I0127 16:15:00.261861 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr7wr\" (UniqueName: \"kubernetes.io/projected/696e1b3d-2da4-4734-9383-43f8c13791fe-kube-api-access-rr7wr\") pod \"collect-profiles-29492175-x2p7k\" (UID: \"696e1b3d-2da4-4734-9383-43f8c13791fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-x2p7k" Jan 27 16:15:00 crc kubenswrapper[4767]: I0127 16:15:00.362687 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr7wr\" (UniqueName: \"kubernetes.io/projected/696e1b3d-2da4-4734-9383-43f8c13791fe-kube-api-access-rr7wr\") pod \"collect-profiles-29492175-x2p7k\" (UID: \"696e1b3d-2da4-4734-9383-43f8c13791fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-x2p7k" Jan 27 16:15:00 crc kubenswrapper[4767]: I0127 16:15:00.362832 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/696e1b3d-2da4-4734-9383-43f8c13791fe-config-volume\") pod \"collect-profiles-29492175-x2p7k\" (UID: \"696e1b3d-2da4-4734-9383-43f8c13791fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-x2p7k" Jan 27 16:15:00 crc kubenswrapper[4767]: I0127 16:15:00.362875 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/696e1b3d-2da4-4734-9383-43f8c13791fe-secret-volume\") pod \"collect-profiles-29492175-x2p7k\" (UID: \"696e1b3d-2da4-4734-9383-43f8c13791fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-x2p7k" Jan 27 16:15:00 crc kubenswrapper[4767]: I0127 16:15:00.364077 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/696e1b3d-2da4-4734-9383-43f8c13791fe-config-volume\") pod \"collect-profiles-29492175-x2p7k\" (UID: \"696e1b3d-2da4-4734-9383-43f8c13791fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-x2p7k" Jan 27 16:15:00 crc kubenswrapper[4767]: I0127 16:15:00.369322 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/696e1b3d-2da4-4734-9383-43f8c13791fe-secret-volume\") pod \"collect-profiles-29492175-x2p7k\" (UID: \"696e1b3d-2da4-4734-9383-43f8c13791fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-x2p7k" Jan 27 16:15:00 crc kubenswrapper[4767]: I0127 16:15:00.380445 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr7wr\" (UniqueName: \"kubernetes.io/projected/696e1b3d-2da4-4734-9383-43f8c13791fe-kube-api-access-rr7wr\") pod \"collect-profiles-29492175-x2p7k\" (UID: \"696e1b3d-2da4-4734-9383-43f8c13791fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-x2p7k" Jan 27 16:15:00 crc kubenswrapper[4767]: I0127 16:15:00.484870 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-x2p7k" Jan 27 16:15:00 crc kubenswrapper[4767]: I0127 16:15:00.904866 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492175-x2p7k"] Jan 27 16:15:01 crc kubenswrapper[4767]: I0127 16:15:01.652984 4767 generic.go:334] "Generic (PLEG): container finished" podID="696e1b3d-2da4-4734-9383-43f8c13791fe" containerID="a44602f05d99c71cfa1456833dceb348ad4aec48b3cbfeeaaf6a1b7cb83e53ad" exitCode=0 Jan 27 16:15:01 crc kubenswrapper[4767]: I0127 16:15:01.654424 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-x2p7k" event={"ID":"696e1b3d-2da4-4734-9383-43f8c13791fe","Type":"ContainerDied","Data":"a44602f05d99c71cfa1456833dceb348ad4aec48b3cbfeeaaf6a1b7cb83e53ad"} Jan 27 16:15:01 crc kubenswrapper[4767]: I0127 16:15:01.654475 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-x2p7k" event={"ID":"696e1b3d-2da4-4734-9383-43f8c13791fe","Type":"ContainerStarted","Data":"0198261e93b6a2f0649c078f1b9dd30c5c118e024a61edfc12cd1fe0b6e509da"} Jan 27 16:15:02 crc kubenswrapper[4767]: I0127 16:15:02.921036 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-x2p7k" Jan 27 16:15:03 crc kubenswrapper[4767]: I0127 16:15:03.000894 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rr7wr\" (UniqueName: \"kubernetes.io/projected/696e1b3d-2da4-4734-9383-43f8c13791fe-kube-api-access-rr7wr\") pod \"696e1b3d-2da4-4734-9383-43f8c13791fe\" (UID: \"696e1b3d-2da4-4734-9383-43f8c13791fe\") " Jan 27 16:15:03 crc kubenswrapper[4767]: I0127 16:15:03.000960 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/696e1b3d-2da4-4734-9383-43f8c13791fe-config-volume\") pod \"696e1b3d-2da4-4734-9383-43f8c13791fe\" (UID: \"696e1b3d-2da4-4734-9383-43f8c13791fe\") " Jan 27 16:15:03 crc kubenswrapper[4767]: I0127 16:15:03.001014 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/696e1b3d-2da4-4734-9383-43f8c13791fe-secret-volume\") pod \"696e1b3d-2da4-4734-9383-43f8c13791fe\" (UID: \"696e1b3d-2da4-4734-9383-43f8c13791fe\") " Jan 27 16:15:03 crc kubenswrapper[4767]: I0127 16:15:03.001854 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/696e1b3d-2da4-4734-9383-43f8c13791fe-config-volume" (OuterVolumeSpecName: "config-volume") pod "696e1b3d-2da4-4734-9383-43f8c13791fe" (UID: "696e1b3d-2da4-4734-9383-43f8c13791fe"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:15:03 crc kubenswrapper[4767]: I0127 16:15:03.006594 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/696e1b3d-2da4-4734-9383-43f8c13791fe-kube-api-access-rr7wr" (OuterVolumeSpecName: "kube-api-access-rr7wr") pod "696e1b3d-2da4-4734-9383-43f8c13791fe" (UID: "696e1b3d-2da4-4734-9383-43f8c13791fe"). InnerVolumeSpecName "kube-api-access-rr7wr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:15:03 crc kubenswrapper[4767]: I0127 16:15:03.007765 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/696e1b3d-2da4-4734-9383-43f8c13791fe-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "696e1b3d-2da4-4734-9383-43f8c13791fe" (UID: "696e1b3d-2da4-4734-9383-43f8c13791fe"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:15:03 crc kubenswrapper[4767]: I0127 16:15:03.102358 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rr7wr\" (UniqueName: \"kubernetes.io/projected/696e1b3d-2da4-4734-9383-43f8c13791fe-kube-api-access-rr7wr\") on node \"crc\" DevicePath \"\"" Jan 27 16:15:03 crc kubenswrapper[4767]: I0127 16:15:03.102399 4767 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/696e1b3d-2da4-4734-9383-43f8c13791fe-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 16:15:03 crc kubenswrapper[4767]: I0127 16:15:03.102408 4767 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/696e1b3d-2da4-4734-9383-43f8c13791fe-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 16:15:03 crc kubenswrapper[4767]: I0127 16:15:03.676490 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-x2p7k" event={"ID":"696e1b3d-2da4-4734-9383-43f8c13791fe","Type":"ContainerDied","Data":"0198261e93b6a2f0649c078f1b9dd30c5c118e024a61edfc12cd1fe0b6e509da"} Jan 27 16:15:03 crc kubenswrapper[4767]: I0127 16:15:03.676540 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-x2p7k" Jan 27 16:15:03 crc kubenswrapper[4767]: I0127 16:15:03.676563 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0198261e93b6a2f0649c078f1b9dd30c5c118e024a61edfc12cd1fe0b6e509da" Jan 27 16:15:50 crc kubenswrapper[4767]: I0127 16:15:50.965216 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2wqqn"] Jan 27 16:15:50 crc kubenswrapper[4767]: E0127 16:15:50.965980 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696e1b3d-2da4-4734-9383-43f8c13791fe" containerName="collect-profiles" Jan 27 16:15:50 crc kubenswrapper[4767]: I0127 16:15:50.965993 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="696e1b3d-2da4-4734-9383-43f8c13791fe" containerName="collect-profiles" Jan 27 16:15:50 crc kubenswrapper[4767]: I0127 16:15:50.966136 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="696e1b3d-2da4-4734-9383-43f8c13791fe" containerName="collect-profiles" Jan 27 16:15:50 crc kubenswrapper[4767]: I0127 16:15:50.967220 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2wqqn" Jan 27 16:15:50 crc kubenswrapper[4767]: I0127 16:15:50.978412 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2wqqn"] Jan 27 16:15:51 crc kubenswrapper[4767]: I0127 16:15:51.087705 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b3dd263-c4c0-4e61-8962-1b54eeaa76ec-catalog-content\") pod \"redhat-marketplace-2wqqn\" (UID: \"3b3dd263-c4c0-4e61-8962-1b54eeaa76ec\") " pod="openshift-marketplace/redhat-marketplace-2wqqn" Jan 27 16:15:51 crc kubenswrapper[4767]: I0127 16:15:51.087754 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-862mr\" (UniqueName: \"kubernetes.io/projected/3b3dd263-c4c0-4e61-8962-1b54eeaa76ec-kube-api-access-862mr\") pod \"redhat-marketplace-2wqqn\" (UID: \"3b3dd263-c4c0-4e61-8962-1b54eeaa76ec\") " pod="openshift-marketplace/redhat-marketplace-2wqqn" Jan 27 16:15:51 crc kubenswrapper[4767]: I0127 16:15:51.087861 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b3dd263-c4c0-4e61-8962-1b54eeaa76ec-utilities\") pod \"redhat-marketplace-2wqqn\" (UID: \"3b3dd263-c4c0-4e61-8962-1b54eeaa76ec\") " pod="openshift-marketplace/redhat-marketplace-2wqqn" Jan 27 16:15:51 crc kubenswrapper[4767]: I0127 16:15:51.189464 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-862mr\" (UniqueName: \"kubernetes.io/projected/3b3dd263-c4c0-4e61-8962-1b54eeaa76ec-kube-api-access-862mr\") pod \"redhat-marketplace-2wqqn\" (UID: \"3b3dd263-c4c0-4e61-8962-1b54eeaa76ec\") " pod="openshift-marketplace/redhat-marketplace-2wqqn" Jan 27 16:15:51 crc kubenswrapper[4767]: I0127 16:15:51.190353 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b3dd263-c4c0-4e61-8962-1b54eeaa76ec-utilities\") pod \"redhat-marketplace-2wqqn\" (UID: \"3b3dd263-c4c0-4e61-8962-1b54eeaa76ec\") " pod="openshift-marketplace/redhat-marketplace-2wqqn" Jan 27 16:15:51 crc kubenswrapper[4767]: I0127 16:15:51.190423 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b3dd263-c4c0-4e61-8962-1b54eeaa76ec-catalog-content\") pod \"redhat-marketplace-2wqqn\" (UID: \"3b3dd263-c4c0-4e61-8962-1b54eeaa76ec\") " pod="openshift-marketplace/redhat-marketplace-2wqqn" Jan 27 16:15:51 crc kubenswrapper[4767]: I0127 16:15:51.190966 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b3dd263-c4c0-4e61-8962-1b54eeaa76ec-utilities\") pod \"redhat-marketplace-2wqqn\" (UID: \"3b3dd263-c4c0-4e61-8962-1b54eeaa76ec\") " pod="openshift-marketplace/redhat-marketplace-2wqqn" Jan 27 16:15:51 crc kubenswrapper[4767]: I0127 16:15:51.191282 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b3dd263-c4c0-4e61-8962-1b54eeaa76ec-catalog-content\") pod \"redhat-marketplace-2wqqn\" (UID: \"3b3dd263-c4c0-4e61-8962-1b54eeaa76ec\") " pod="openshift-marketplace/redhat-marketplace-2wqqn" Jan 27 16:15:51 crc kubenswrapper[4767]: I0127 16:15:51.220687 4767 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-862mr\" (UniqueName: \"kubernetes.io/projected/3b3dd263-c4c0-4e61-8962-1b54eeaa76ec-kube-api-access-862mr\") pod \"redhat-marketplace-2wqqn\" (UID: \"3b3dd263-c4c0-4e61-8962-1b54eeaa76ec\") " pod="openshift-marketplace/redhat-marketplace-2wqqn" Jan 27 16:15:51 crc kubenswrapper[4767]: I0127 16:15:51.283762 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2wqqn" Jan 27 16:15:51 crc kubenswrapper[4767]: I0127 16:15:51.742329 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2wqqn"] Jan 27 16:15:52 crc kubenswrapper[4767]: I0127 16:15:52.021498 4767 generic.go:334] "Generic (PLEG): container finished" podID="3b3dd263-c4c0-4e61-8962-1b54eeaa76ec" containerID="3f7c9fe2b47110c0d2c8ffa8266610a9d350fc4b7db89bbb29808c75cd51fd25" exitCode=0 Jan 27 16:15:52 crc kubenswrapper[4767]: I0127 16:15:52.021549 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2wqqn" event={"ID":"3b3dd263-c4c0-4e61-8962-1b54eeaa76ec","Type":"ContainerDied","Data":"3f7c9fe2b47110c0d2c8ffa8266610a9d350fc4b7db89bbb29808c75cd51fd25"} Jan 27 16:15:52 crc kubenswrapper[4767]: I0127 16:15:52.021580 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2wqqn" event={"ID":"3b3dd263-c4c0-4e61-8962-1b54eeaa76ec","Type":"ContainerStarted","Data":"07f2cc98e8fd96bb6f27abb741673b1fc0c517b70d3b19618e7ed1e400c41676"} Jan 27 16:15:54 crc kubenswrapper[4767]: I0127 16:15:54.036546 4767 generic.go:334] "Generic (PLEG): container finished" podID="3b3dd263-c4c0-4e61-8962-1b54eeaa76ec" containerID="c61b892ada9b13f889c57bbba0d2175db4383aab13eb12a639832cc50291202b" exitCode=0 Jan 27 16:15:54 crc kubenswrapper[4767]: I0127 16:15:54.036658 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2wqqn" event={"ID":"3b3dd263-c4c0-4e61-8962-1b54eeaa76ec","Type":"ContainerDied","Data":"c61b892ada9b13f889c57bbba0d2175db4383aab13eb12a639832cc50291202b"} Jan 27 16:15:55 crc kubenswrapper[4767]: I0127 16:15:55.046563 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2wqqn" event={"ID":"3b3dd263-c4c0-4e61-8962-1b54eeaa76ec","Type":"ContainerStarted","Data":"6378035909f0215709f4d834d78369731bdfee2cd1acbdb70e5c7a432487351c"} Jan 27 16:15:55 crc kubenswrapper[4767]: I0127 16:15:55.073367 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2wqqn" podStartSLOduration=2.641396055 podStartE2EDuration="5.073343422s" podCreationTimestamp="2026-01-27 16:15:50 +0000 UTC" firstStartedPulling="2026-01-27 16:15:52.022855719 +0000 UTC m=+1574.411873242" lastFinishedPulling="2026-01-27 16:15:54.454803046 +0000 UTC m=+1576.843820609" observedRunningTime="2026-01-27 16:15:55.066632633 +0000 UTC m=+1577.455650166" watchObservedRunningTime="2026-01-27 16:15:55.073343422 +0000 UTC m=+1577.462360955" Jan 27 16:16:01 crc kubenswrapper[4767]: I0127 16:16:01.283985 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2wqqn" Jan 27 16:16:01 crc kubenswrapper[4767]: I0127 16:16:01.284239 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2wqqn" Jan 27 16:16:01 crc kubenswrapper[4767]: I0127 16:16:01.345481 4767 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2wqqn" Jan 27 16:16:02 crc kubenswrapper[4767]: I0127 16:16:02.156166 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2wqqn" Jan 27 16:16:02 crc kubenswrapper[4767]: I0127 16:16:02.221422 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2wqqn"] Jan 27 16:16:04 crc kubenswrapper[4767]: I0127 16:16:04.111383 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2wqqn" podUID="3b3dd263-c4c0-4e61-8962-1b54eeaa76ec" containerName="registry-server" containerID="cri-o://6378035909f0215709f4d834d78369731bdfee2cd1acbdb70e5c7a432487351c" gracePeriod=2 Jan 27 16:16:04 crc kubenswrapper[4767]: I0127 16:16:04.519481 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2wqqn" Jan 27 16:16:04 crc kubenswrapper[4767]: I0127 16:16:04.684382 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-862mr\" (UniqueName: \"kubernetes.io/projected/3b3dd263-c4c0-4e61-8962-1b54eeaa76ec-kube-api-access-862mr\") pod \"3b3dd263-c4c0-4e61-8962-1b54eeaa76ec\" (UID: \"3b3dd263-c4c0-4e61-8962-1b54eeaa76ec\") " Jan 27 16:16:04 crc kubenswrapper[4767]: I0127 16:16:04.684450 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b3dd263-c4c0-4e61-8962-1b54eeaa76ec-utilities\") pod \"3b3dd263-c4c0-4e61-8962-1b54eeaa76ec\" (UID: \"3b3dd263-c4c0-4e61-8962-1b54eeaa76ec\") " Jan 27 16:16:04 crc kubenswrapper[4767]: I0127 16:16:04.684898 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b3dd263-c4c0-4e61-8962-1b54eeaa76ec-catalog-content\") pod \"3b3dd263-c4c0-4e61-8962-1b54eeaa76ec\" (UID: \"3b3dd263-c4c0-4e61-8962-1b54eeaa76ec\") " Jan 27 16:16:04 crc kubenswrapper[4767]: I0127 16:16:04.685421 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b3dd263-c4c0-4e61-8962-1b54eeaa76ec-utilities" (OuterVolumeSpecName: "utilities") pod "3b3dd263-c4c0-4e61-8962-1b54eeaa76ec" (UID: "3b3dd263-c4c0-4e61-8962-1b54eeaa76ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:16:04 crc kubenswrapper[4767]: I0127 16:16:04.693462 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b3dd263-c4c0-4e61-8962-1b54eeaa76ec-kube-api-access-862mr" (OuterVolumeSpecName: "kube-api-access-862mr") pod "3b3dd263-c4c0-4e61-8962-1b54eeaa76ec" (UID: "3b3dd263-c4c0-4e61-8962-1b54eeaa76ec"). InnerVolumeSpecName "kube-api-access-862mr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:16:04 crc kubenswrapper[4767]: I0127 16:16:04.711141 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b3dd263-c4c0-4e61-8962-1b54eeaa76ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b3dd263-c4c0-4e61-8962-1b54eeaa76ec" (UID: "3b3dd263-c4c0-4e61-8962-1b54eeaa76ec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:16:04 crc kubenswrapper[4767]: I0127 16:16:04.786505 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b3dd263-c4c0-4e61-8962-1b54eeaa76ec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:16:04 crc kubenswrapper[4767]: I0127 16:16:04.786541 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-862mr\" (UniqueName: \"kubernetes.io/projected/3b3dd263-c4c0-4e61-8962-1b54eeaa76ec-kube-api-access-862mr\") on node \"crc\" DevicePath \"\"" Jan 27 16:16:04 crc kubenswrapper[4767]: I0127 16:16:04.786557 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b3dd263-c4c0-4e61-8962-1b54eeaa76ec-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:16:05 crc kubenswrapper[4767]: I0127 16:16:05.119277 4767 generic.go:334] "Generic (PLEG): container finished" podID="3b3dd263-c4c0-4e61-8962-1b54eeaa76ec" containerID="6378035909f0215709f4d834d78369731bdfee2cd1acbdb70e5c7a432487351c" exitCode=0 Jan 27 16:16:05 crc kubenswrapper[4767]: I0127 16:16:05.119323 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2wqqn" event={"ID":"3b3dd263-c4c0-4e61-8962-1b54eeaa76ec","Type":"ContainerDied","Data":"6378035909f0215709f4d834d78369731bdfee2cd1acbdb70e5c7a432487351c"} Jan 27 16:16:05 crc kubenswrapper[4767]: I0127 16:16:05.119352 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2wqqn" event={"ID":"3b3dd263-c4c0-4e61-8962-1b54eeaa76ec","Type":"ContainerDied","Data":"07f2cc98e8fd96bb6f27abb741673b1fc0c517b70d3b19618e7ed1e400c41676"} Jan 27 16:16:05 crc kubenswrapper[4767]: I0127 16:16:05.119371 4767 scope.go:117] "RemoveContainer" containerID="6378035909f0215709f4d834d78369731bdfee2cd1acbdb70e5c7a432487351c" Jan 27 16:16:05 crc kubenswrapper[4767]: I0127 16:16:05.119373 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2wqqn" Jan 27 16:16:05 crc kubenswrapper[4767]: I0127 16:16:05.138006 4767 scope.go:117] "RemoveContainer" containerID="c61b892ada9b13f889c57bbba0d2175db4383aab13eb12a639832cc50291202b" Jan 27 16:16:05 crc kubenswrapper[4767]: I0127 16:16:05.152067 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2wqqn"] Jan 27 16:16:05 crc kubenswrapper[4767]: I0127 16:16:05.158893 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2wqqn"] Jan 27 16:16:05 crc kubenswrapper[4767]: I0127 16:16:05.173654 4767 scope.go:117] "RemoveContainer" containerID="3f7c9fe2b47110c0d2c8ffa8266610a9d350fc4b7db89bbb29808c75cd51fd25" Jan 27 16:16:05 crc kubenswrapper[4767]: I0127 16:16:05.190902 4767 scope.go:117] "RemoveContainer" containerID="6378035909f0215709f4d834d78369731bdfee2cd1acbdb70e5c7a432487351c" Jan 27 16:16:05 crc kubenswrapper[4767]: E0127 16:16:05.191431 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6378035909f0215709f4d834d78369731bdfee2cd1acbdb70e5c7a432487351c\": container with ID starting with 6378035909f0215709f4d834d78369731bdfee2cd1acbdb70e5c7a432487351c not found: ID does not exist" containerID="6378035909f0215709f4d834d78369731bdfee2cd1acbdb70e5c7a432487351c" Jan 27 16:16:05 crc kubenswrapper[4767]: I0127 16:16:05.191459 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6378035909f0215709f4d834d78369731bdfee2cd1acbdb70e5c7a432487351c"} err="failed to get container status \"6378035909f0215709f4d834d78369731bdfee2cd1acbdb70e5c7a432487351c\": rpc error: code = NotFound desc = could not find container \"6378035909f0215709f4d834d78369731bdfee2cd1acbdb70e5c7a432487351c\": container with ID starting with 6378035909f0215709f4d834d78369731bdfee2cd1acbdb70e5c7a432487351c not found: ID does not exist" Jan 27 16:16:05 crc kubenswrapper[4767]: I0127 16:16:05.191478 4767 scope.go:117] "RemoveContainer" containerID="c61b892ada9b13f889c57bbba0d2175db4383aab13eb12a639832cc50291202b" Jan 27 16:16:05 crc kubenswrapper[4767]: E0127 16:16:05.191768 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c61b892ada9b13f889c57bbba0d2175db4383aab13eb12a639832cc50291202b\": container with ID starting with c61b892ada9b13f889c57bbba0d2175db4383aab13eb12a639832cc50291202b not found: ID does not exist" containerID="c61b892ada9b13f889c57bbba0d2175db4383aab13eb12a639832cc50291202b" Jan 27 16:16:05 crc kubenswrapper[4767]: I0127 16:16:05.191810 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c61b892ada9b13f889c57bbba0d2175db4383aab13eb12a639832cc50291202b"} err="failed to get container status \"c61b892ada9b13f889c57bbba0d2175db4383aab13eb12a639832cc50291202b\": rpc error: code = NotFound desc = could not find container \"c61b892ada9b13f889c57bbba0d2175db4383aab13eb12a639832cc50291202b\": container with ID starting with c61b892ada9b13f889c57bbba0d2175db4383aab13eb12a639832cc50291202b not found: ID does not exist" Jan 27 16:16:05 crc kubenswrapper[4767]: I0127 16:16:05.191838 4767 scope.go:117] "RemoveContainer" containerID="3f7c9fe2b47110c0d2c8ffa8266610a9d350fc4b7db89bbb29808c75cd51fd25" Jan 27 16:16:05 crc kubenswrapper[4767]: E0127 16:16:05.192157 4767 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3f7c9fe2b47110c0d2c8ffa8266610a9d350fc4b7db89bbb29808c75cd51fd25\": container with ID starting with 3f7c9fe2b47110c0d2c8ffa8266610a9d350fc4b7db89bbb29808c75cd51fd25 not found: ID does not exist" containerID="3f7c9fe2b47110c0d2c8ffa8266610a9d350fc4b7db89bbb29808c75cd51fd25" Jan 27 16:16:05 crc kubenswrapper[4767]: I0127 16:16:05.192192 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f7c9fe2b47110c0d2c8ffa8266610a9d350fc4b7db89bbb29808c75cd51fd25"} err="failed to get container status \"3f7c9fe2b47110c0d2c8ffa8266610a9d350fc4b7db89bbb29808c75cd51fd25\": rpc error: code = NotFound desc = could not find container \"3f7c9fe2b47110c0d2c8ffa8266610a9d350fc4b7db89bbb29808c75cd51fd25\": container with ID starting with 3f7c9fe2b47110c0d2c8ffa8266610a9d350fc4b7db89bbb29808c75cd51fd25 not found: ID does not exist" Jan 27 16:16:06 crc kubenswrapper[4767]: I0127 16:16:06.336854 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b3dd263-c4c0-4e61-8962-1b54eeaa76ec" path="/var/lib/kubelet/pods/3b3dd263-c4c0-4e61-8962-1b54eeaa76ec/volumes" Jan 27 16:16:41 crc kubenswrapper[4767]: I0127 16:16:41.931777 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jgkwk"] Jan 27 16:16:41 crc kubenswrapper[4767]: E0127 16:16:41.933997 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3dd263-c4c0-4e61-8962-1b54eeaa76ec" containerName="extract-utilities" Jan 27 16:16:41 crc kubenswrapper[4767]: I0127 16:16:41.934080 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3dd263-c4c0-4e61-8962-1b54eeaa76ec" containerName="extract-utilities" Jan 27 16:16:41 crc kubenswrapper[4767]: E0127 16:16:41.934149 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3dd263-c4c0-4e61-8962-1b54eeaa76ec" containerName="registry-server" Jan 27 16:16:41 crc kubenswrapper[4767]: I0127 16:16:41.938301 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3dd263-c4c0-4e61-8962-1b54eeaa76ec" containerName="registry-server" Jan 27 16:16:41 crc kubenswrapper[4767]: E0127 16:16:41.938533 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3dd263-c4c0-4e61-8962-1b54eeaa76ec" containerName="extract-content" Jan 27 16:16:41 crc kubenswrapper[4767]: I0127 16:16:41.938619 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3dd263-c4c0-4e61-8962-1b54eeaa76ec" containerName="extract-content" Jan 27 16:16:41 crc kubenswrapper[4767]: I0127 16:16:41.938995 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b3dd263-c4c0-4e61-8962-1b54eeaa76ec" containerName="registry-server" Jan 27 16:16:41 crc kubenswrapper[4767]: I0127 16:16:41.940419 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jgkwk" Jan 27 16:16:41 crc kubenswrapper[4767]: I0127 16:16:41.958036 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jgkwk"] Jan 27 16:16:42 crc kubenswrapper[4767]: I0127 16:16:42.103069 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8b5b9f7-f652-49d1-99ed-d215c8ad07d1-catalog-content\") pod \"community-operators-jgkwk\" (UID: \"f8b5b9f7-f652-49d1-99ed-d215c8ad07d1\") " pod="openshift-marketplace/community-operators-jgkwk" Jan 27 16:16:42 crc kubenswrapper[4767]: I0127 16:16:42.103150 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8b5b9f7-f652-49d1-99ed-d215c8ad07d1-utilities\") pod \"community-operators-jgkwk\" (UID: \"f8b5b9f7-f652-49d1-99ed-d215c8ad07d1\") " pod="openshift-marketplace/community-operators-jgkwk" Jan 27 16:16:42 crc kubenswrapper[4767]: I0127 16:16:42.103675 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjjj9\" (UniqueName: \"kubernetes.io/projected/f8b5b9f7-f652-49d1-99ed-d215c8ad07d1-kube-api-access-zjjj9\") pod \"community-operators-jgkwk\" (UID: \"f8b5b9f7-f652-49d1-99ed-d215c8ad07d1\") " pod="openshift-marketplace/community-operators-jgkwk" Jan 27 16:16:42 crc kubenswrapper[4767]: I0127 16:16:42.205463 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjjj9\" (UniqueName: \"kubernetes.io/projected/f8b5b9f7-f652-49d1-99ed-d215c8ad07d1-kube-api-access-zjjj9\") pod \"community-operators-jgkwk\" (UID: \"f8b5b9f7-f652-49d1-99ed-d215c8ad07d1\") " pod="openshift-marketplace/community-operators-jgkwk" Jan 27 16:16:42 crc kubenswrapper[4767]: I0127 16:16:42.205599 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8b5b9f7-f652-49d1-99ed-d215c8ad07d1-catalog-content\") pod \"community-operators-jgkwk\" (UID: \"f8b5b9f7-f652-49d1-99ed-d215c8ad07d1\") " pod="openshift-marketplace/community-operators-jgkwk" Jan 27 16:16:42 crc kubenswrapper[4767]: I0127 16:16:42.205643 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8b5b9f7-f652-49d1-99ed-d215c8ad07d1-utilities\") pod \"community-operators-jgkwk\" (UID: \"f8b5b9f7-f652-49d1-99ed-d215c8ad07d1\") " pod="openshift-marketplace/community-operators-jgkwk" Jan 27 16:16:42 crc kubenswrapper[4767]: I0127 16:16:42.206228 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8b5b9f7-f652-49d1-99ed-d215c8ad07d1-utilities\") pod \"community-operators-jgkwk\" (UID: \"f8b5b9f7-f652-49d1-99ed-d215c8ad07d1\") " pod="openshift-marketplace/community-operators-jgkwk" Jan 27 16:16:42 crc kubenswrapper[4767]: I0127 16:16:42.206358 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8b5b9f7-f652-49d1-99ed-d215c8ad07d1-catalog-content\") pod \"community-operators-jgkwk\" (UID: \"f8b5b9f7-f652-49d1-99ed-d215c8ad07d1\") " pod="openshift-marketplace/community-operators-jgkwk" Jan 27 16:16:42 crc kubenswrapper[4767]: I0127 16:16:42.224408 4767 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zjjj9\" (UniqueName: \"kubernetes.io/projected/f8b5b9f7-f652-49d1-99ed-d215c8ad07d1-kube-api-access-zjjj9\") pod \"community-operators-jgkwk\" (UID: \"f8b5b9f7-f652-49d1-99ed-d215c8ad07d1\") " pod="openshift-marketplace/community-operators-jgkwk" Jan 27 16:16:42 crc kubenswrapper[4767]: I0127 16:16:42.261725 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jgkwk" Jan 27 16:16:42 crc kubenswrapper[4767]: I0127 16:16:42.815102 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jgkwk"] Jan 27 16:16:43 crc kubenswrapper[4767]: I0127 16:16:43.439800 4767 generic.go:334] "Generic (PLEG): container finished" podID="f8b5b9f7-f652-49d1-99ed-d215c8ad07d1" containerID="098a8793eaed3437672ab14000efcbb55f031b1c0145d5a1d0b14b57df596063" exitCode=0 Jan 27 16:16:43 crc kubenswrapper[4767]: I0127 16:16:43.439861 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgkwk" event={"ID":"f8b5b9f7-f652-49d1-99ed-d215c8ad07d1","Type":"ContainerDied","Data":"098a8793eaed3437672ab14000efcbb55f031b1c0145d5a1d0b14b57df596063"} Jan 27 16:16:43 crc kubenswrapper[4767]: I0127 16:16:43.440123 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgkwk" event={"ID":"f8b5b9f7-f652-49d1-99ed-d215c8ad07d1","Type":"ContainerStarted","Data":"e01ce0c540bb67f7257e38a23bbac2e85c4c14a6299f076208716403579aea29"} Jan 27 16:16:45 crc kubenswrapper[4767]: I0127 16:16:45.454059 4767 generic.go:334] "Generic (PLEG): container finished" podID="f8b5b9f7-f652-49d1-99ed-d215c8ad07d1" containerID="7436d9691c01115d5ba9d82f4bd960ddee5b593c48896e39c323aaa29c7538e0" exitCode=0 Jan 27 16:16:45 crc kubenswrapper[4767]: I0127 16:16:45.454108 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgkwk" event={"ID":"f8b5b9f7-f652-49d1-99ed-d215c8ad07d1","Type":"ContainerDied","Data":"7436d9691c01115d5ba9d82f4bd960ddee5b593c48896e39c323aaa29c7538e0"} Jan 27 16:16:46 crc kubenswrapper[4767]: I0127 16:16:46.485543 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgkwk" event={"ID":"f8b5b9f7-f652-49d1-99ed-d215c8ad07d1","Type":"ContainerStarted","Data":"5a74e404d01112e165e4304059c243c1610582acdf1a8c7788c0403fbaafac5e"} Jan 27 16:16:46 crc kubenswrapper[4767]: I0127 16:16:46.522941 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jgkwk" podStartSLOduration=3.075867 podStartE2EDuration="5.522918545s" podCreationTimestamp="2026-01-27 16:16:41 +0000 UTC" firstStartedPulling="2026-01-27 16:16:43.441802164 +0000 UTC m=+1625.830819697" lastFinishedPulling="2026-01-27 16:16:45.888853719 +0000 UTC m=+1628.277871242" observedRunningTime="2026-01-27 16:16:46.511682207 +0000 UTC m=+1628.900699740" watchObservedRunningTime="2026-01-27 16:16:46.522918545 +0000 UTC m=+1628.911936078" Jan 27 16:16:52 crc kubenswrapper[4767]: I0127 16:16:52.262272 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jgkwk" Jan 27 16:16:52 crc kubenswrapper[4767]: I0127 16:16:52.262611 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jgkwk" Jan 27 16:16:52 crc kubenswrapper[4767]: I0127 16:16:52.308056 4767 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jgkwk" Jan 27 16:16:52 crc kubenswrapper[4767]: I0127 16:16:52.563637 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jgkwk" Jan 27 16:16:52 crc kubenswrapper[4767]: I0127 16:16:52.606884 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jgkwk"] Jan 27 16:16:54 crc kubenswrapper[4767]: I0127 16:16:54.539287 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jgkwk" podUID="f8b5b9f7-f652-49d1-99ed-d215c8ad07d1" containerName="registry-server" containerID="cri-o://5a74e404d01112e165e4304059c243c1610582acdf1a8c7788c0403fbaafac5e" gracePeriod=2 Jan 27 16:16:54 crc kubenswrapper[4767]: I0127 16:16:54.858094 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:16:54 crc kubenswrapper[4767]: I0127 16:16:54.858517 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:16:54 crc kubenswrapper[4767]: I0127 16:16:54.953416 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jgkwk" Jan 27 16:16:55 crc kubenswrapper[4767]: I0127 16:16:55.000050 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjjj9\" (UniqueName: \"kubernetes.io/projected/f8b5b9f7-f652-49d1-99ed-d215c8ad07d1-kube-api-access-zjjj9\") pod \"f8b5b9f7-f652-49d1-99ed-d215c8ad07d1\" (UID: \"f8b5b9f7-f652-49d1-99ed-d215c8ad07d1\") " Jan 27 16:16:55 crc kubenswrapper[4767]: I0127 16:16:55.000116 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8b5b9f7-f652-49d1-99ed-d215c8ad07d1-catalog-content\") pod \"f8b5b9f7-f652-49d1-99ed-d215c8ad07d1\" (UID: \"f8b5b9f7-f652-49d1-99ed-d215c8ad07d1\") " Jan 27 16:16:55 crc kubenswrapper[4767]: I0127 16:16:55.000143 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8b5b9f7-f652-49d1-99ed-d215c8ad07d1-utilities\") pod \"f8b5b9f7-f652-49d1-99ed-d215c8ad07d1\" (UID: \"f8b5b9f7-f652-49d1-99ed-d215c8ad07d1\") " Jan 27 16:16:55 crc kubenswrapper[4767]: I0127 16:16:55.001104 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8b5b9f7-f652-49d1-99ed-d215c8ad07d1-utilities" (OuterVolumeSpecName: "utilities") pod "f8b5b9f7-f652-49d1-99ed-d215c8ad07d1" (UID: "f8b5b9f7-f652-49d1-99ed-d215c8ad07d1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:16:55 crc kubenswrapper[4767]: I0127 16:16:55.007386 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8b5b9f7-f652-49d1-99ed-d215c8ad07d1-kube-api-access-zjjj9" (OuterVolumeSpecName: "kube-api-access-zjjj9") pod "f8b5b9f7-f652-49d1-99ed-d215c8ad07d1" (UID: "f8b5b9f7-f652-49d1-99ed-d215c8ad07d1"). InnerVolumeSpecName "kube-api-access-zjjj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:16:55 crc kubenswrapper[4767]: I0127 16:16:55.073820 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8b5b9f7-f652-49d1-99ed-d215c8ad07d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8b5b9f7-f652-49d1-99ed-d215c8ad07d1" (UID: "f8b5b9f7-f652-49d1-99ed-d215c8ad07d1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:16:55 crc kubenswrapper[4767]: I0127 16:16:55.102300 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjjj9\" (UniqueName: \"kubernetes.io/projected/f8b5b9f7-f652-49d1-99ed-d215c8ad07d1-kube-api-access-zjjj9\") on node \"crc\" DevicePath \"\"" Jan 27 16:16:55 crc kubenswrapper[4767]: I0127 16:16:55.102465 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8b5b9f7-f652-49d1-99ed-d215c8ad07d1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:16:55 crc kubenswrapper[4767]: I0127 16:16:55.102479 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8b5b9f7-f652-49d1-99ed-d215c8ad07d1-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:16:55 crc kubenswrapper[4767]: I0127 16:16:55.552377 4767 generic.go:334] "Generic (PLEG): container finished" podID="f8b5b9f7-f652-49d1-99ed-d215c8ad07d1" containerID="5a74e404d01112e165e4304059c243c1610582acdf1a8c7788c0403fbaafac5e" exitCode=0 Jan 27 16:16:55 crc kubenswrapper[4767]: I0127 16:16:55.552440 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jgkwk" Jan 27 16:16:55 crc kubenswrapper[4767]: I0127 16:16:55.552483 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgkwk" event={"ID":"f8b5b9f7-f652-49d1-99ed-d215c8ad07d1","Type":"ContainerDied","Data":"5a74e404d01112e165e4304059c243c1610582acdf1a8c7788c0403fbaafac5e"} Jan 27 16:16:55 crc kubenswrapper[4767]: I0127 16:16:55.552564 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgkwk" event={"ID":"f8b5b9f7-f652-49d1-99ed-d215c8ad07d1","Type":"ContainerDied","Data":"e01ce0c540bb67f7257e38a23bbac2e85c4c14a6299f076208716403579aea29"} Jan 27 16:16:55 crc kubenswrapper[4767]: I0127 16:16:55.552597 4767 scope.go:117] "RemoveContainer" containerID="5a74e404d01112e165e4304059c243c1610582acdf1a8c7788c0403fbaafac5e" Jan 27 16:16:55 crc kubenswrapper[4767]: I0127 16:16:55.574371 4767 scope.go:117] "RemoveContainer" containerID="7436d9691c01115d5ba9d82f4bd960ddee5b593c48896e39c323aaa29c7538e0" Jan 27 16:16:55 crc kubenswrapper[4767]: I0127 16:16:55.607420 4767 scope.go:117] "RemoveContainer" containerID="098a8793eaed3437672ab14000efcbb55f031b1c0145d5a1d0b14b57df596063" Jan 27 16:16:55 crc kubenswrapper[4767]: I0127 16:16:55.618864 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jgkwk"] Jan 27 16:16:55 crc kubenswrapper[4767]: I0127 16:16:55.624777 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jgkwk"] Jan 27 16:16:55 crc kubenswrapper[4767]: I0127 16:16:55.635245 4767 scope.go:117] "RemoveContainer" containerID="5a74e404d01112e165e4304059c243c1610582acdf1a8c7788c0403fbaafac5e" Jan 27 16:16:55 crc kubenswrapper[4767]: E0127 16:16:55.635762 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a74e404d01112e165e4304059c243c1610582acdf1a8c7788c0403fbaafac5e\": container with ID starting with 5a74e404d01112e165e4304059c243c1610582acdf1a8c7788c0403fbaafac5e not found: ID does not exist" containerID="5a74e404d01112e165e4304059c243c1610582acdf1a8c7788c0403fbaafac5e" Jan 27 16:16:55 crc kubenswrapper[4767]: I0127 16:16:55.635796 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a74e404d01112e165e4304059c243c1610582acdf1a8c7788c0403fbaafac5e"} err="failed to get container status \"5a74e404d01112e165e4304059c243c1610582acdf1a8c7788c0403fbaafac5e\": rpc error: code = NotFound desc = could not find container \"5a74e404d01112e165e4304059c243c1610582acdf1a8c7788c0403fbaafac5e\": container with ID starting with 5a74e404d01112e165e4304059c243c1610582acdf1a8c7788c0403fbaafac5e not found: ID does not exist" Jan 27 16:16:55 crc kubenswrapper[4767]: I0127 16:16:55.635815 4767 scope.go:117] "RemoveContainer" containerID="7436d9691c01115d5ba9d82f4bd960ddee5b593c48896e39c323aaa29c7538e0" Jan 27 16:16:55 crc kubenswrapper[4767]: E0127 16:16:55.636279 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7436d9691c01115d5ba9d82f4bd960ddee5b593c48896e39c323aaa29c7538e0\": container with ID starting with 7436d9691c01115d5ba9d82f4bd960ddee5b593c48896e39c323aaa29c7538e0 not found: ID does not exist" containerID="7436d9691c01115d5ba9d82f4bd960ddee5b593c48896e39c323aaa29c7538e0" Jan 27 16:16:55 crc kubenswrapper[4767]: I0127 16:16:55.636309 4767 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7436d9691c01115d5ba9d82f4bd960ddee5b593c48896e39c323aaa29c7538e0"} err="failed to get container status \"7436d9691c01115d5ba9d82f4bd960ddee5b593c48896e39c323aaa29c7538e0\": rpc error: code = NotFound desc = could not find container \"7436d9691c01115d5ba9d82f4bd960ddee5b593c48896e39c323aaa29c7538e0\": container with ID starting with 7436d9691c01115d5ba9d82f4bd960ddee5b593c48896e39c323aaa29c7538e0 not found: ID does not exist" Jan 27 16:16:55 crc kubenswrapper[4767]: I0127 16:16:55.636323 4767 scope.go:117] "RemoveContainer" containerID="098a8793eaed3437672ab14000efcbb55f031b1c0145d5a1d0b14b57df596063" Jan 27 16:16:55 crc kubenswrapper[4767]: E0127 16:16:55.636610 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"098a8793eaed3437672ab14000efcbb55f031b1c0145d5a1d0b14b57df596063\": container with ID starting with 098a8793eaed3437672ab14000efcbb55f031b1c0145d5a1d0b14b57df596063 not found: ID does not exist" containerID="098a8793eaed3437672ab14000efcbb55f031b1c0145d5a1d0b14b57df596063" Jan 27 16:16:55 crc kubenswrapper[4767]: I0127 16:16:55.636639 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"098a8793eaed3437672ab14000efcbb55f031b1c0145d5a1d0b14b57df596063"} err="failed to get container status \"098a8793eaed3437672ab14000efcbb55f031b1c0145d5a1d0b14b57df596063\": rpc error: code = NotFound desc = could not find container \"098a8793eaed3437672ab14000efcbb55f031b1c0145d5a1d0b14b57df596063\": container with ID starting with 098a8793eaed3437672ab14000efcbb55f031b1c0145d5a1d0b14b57df596063 not found: ID does not exist" Jan 27 16:16:56 crc kubenswrapper[4767]: I0127 16:16:56.341101 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8b5b9f7-f652-49d1-99ed-d215c8ad07d1" path="/var/lib/kubelet/pods/f8b5b9f7-f652-49d1-99ed-d215c8ad07d1/volumes" Jan 27 16:17:02 crc kubenswrapper[4767]: I0127 16:17:02.969234 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z95c2"] Jan 27 16:17:02 crc kubenswrapper[4767]: E0127 16:17:02.970997 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b5b9f7-f652-49d1-99ed-d215c8ad07d1" containerName="extract-utilities" Jan 27 16:17:02 crc kubenswrapper[4767]: I0127 16:17:02.971102 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b5b9f7-f652-49d1-99ed-d215c8ad07d1" containerName="extract-utilities" Jan 27 16:17:02 crc kubenswrapper[4767]: E0127 16:17:02.971216 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b5b9f7-f652-49d1-99ed-d215c8ad07d1" containerName="extract-content" Jan 27 16:17:02 crc kubenswrapper[4767]: I0127 16:17:02.971282 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b5b9f7-f652-49d1-99ed-d215c8ad07d1" containerName="extract-content" Jan 27 16:17:02 crc kubenswrapper[4767]: E0127 16:17:02.971356 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b5b9f7-f652-49d1-99ed-d215c8ad07d1" containerName="registry-server" Jan 27 16:17:02 crc kubenswrapper[4767]: I0127 16:17:02.971424 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b5b9f7-f652-49d1-99ed-d215c8ad07d1" containerName="registry-server" Jan 27 16:17:02 crc kubenswrapper[4767]: I0127 16:17:02.971702 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8b5b9f7-f652-49d1-99ed-d215c8ad07d1" 
containerName="registry-server" Jan 27 16:17:02 crc kubenswrapper[4767]: I0127 16:17:02.972880 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z95c2" Jan 27 16:17:02 crc kubenswrapper[4767]: I0127 16:17:02.980072 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z95c2"] Jan 27 16:17:03 crc kubenswrapper[4767]: I0127 16:17:03.129309 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npqdw\" (UniqueName: \"kubernetes.io/projected/fbc963d8-57a3-4a96-b9bb-a3b96986a6e2-kube-api-access-npqdw\") pod \"certified-operators-z95c2\" (UID: \"fbc963d8-57a3-4a96-b9bb-a3b96986a6e2\") " pod="openshift-marketplace/certified-operators-z95c2" Jan 27 16:17:03 crc kubenswrapper[4767]: I0127 16:17:03.129386 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbc963d8-57a3-4a96-b9bb-a3b96986a6e2-utilities\") pod \"certified-operators-z95c2\" (UID: \"fbc963d8-57a3-4a96-b9bb-a3b96986a6e2\") " pod="openshift-marketplace/certified-operators-z95c2" Jan 27 16:17:03 crc kubenswrapper[4767]: I0127 16:17:03.129407 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbc963d8-57a3-4a96-b9bb-a3b96986a6e2-catalog-content\") pod \"certified-operators-z95c2\" (UID: \"fbc963d8-57a3-4a96-b9bb-a3b96986a6e2\") " pod="openshift-marketplace/certified-operators-z95c2" Jan 27 16:17:03 crc kubenswrapper[4767]: I0127 16:17:03.230713 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbc963d8-57a3-4a96-b9bb-a3b96986a6e2-utilities\") pod \"certified-operators-z95c2\" (UID: \"fbc963d8-57a3-4a96-b9bb-a3b96986a6e2\") " pod="openshift-marketplace/certified-operators-z95c2" Jan 27 16:17:03 crc kubenswrapper[4767]: I0127 16:17:03.230787 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbc963d8-57a3-4a96-b9bb-a3b96986a6e2-catalog-content\") pod \"certified-operators-z95c2\" (UID: \"fbc963d8-57a3-4a96-b9bb-a3b96986a6e2\") " pod="openshift-marketplace/certified-operators-z95c2" Jan 27 16:17:03 crc kubenswrapper[4767]: I0127 16:17:03.230950 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npqdw\" (UniqueName: \"kubernetes.io/projected/fbc963d8-57a3-4a96-b9bb-a3b96986a6e2-kube-api-access-npqdw\") pod \"certified-operators-z95c2\" (UID: \"fbc963d8-57a3-4a96-b9bb-a3b96986a6e2\") " pod="openshift-marketplace/certified-operators-z95c2" Jan 27 16:17:03 crc kubenswrapper[4767]: I0127 16:17:03.231350 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbc963d8-57a3-4a96-b9bb-a3b96986a6e2-utilities\") pod \"certified-operators-z95c2\" (UID: \"fbc963d8-57a3-4a96-b9bb-a3b96986a6e2\") " pod="openshift-marketplace/certified-operators-z95c2" Jan 27 16:17:03 crc kubenswrapper[4767]: I0127 16:17:03.231387 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbc963d8-57a3-4a96-b9bb-a3b96986a6e2-catalog-content\") pod \"certified-operators-z95c2\" (UID: \"fbc963d8-57a3-4a96-b9bb-a3b96986a6e2\") " 
pod="openshift-marketplace/certified-operators-z95c2" Jan 27 16:17:03 crc kubenswrapper[4767]: I0127 16:17:03.254398 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npqdw\" (UniqueName: \"kubernetes.io/projected/fbc963d8-57a3-4a96-b9bb-a3b96986a6e2-kube-api-access-npqdw\") pod \"certified-operators-z95c2\" (UID: \"fbc963d8-57a3-4a96-b9bb-a3b96986a6e2\") " pod="openshift-marketplace/certified-operators-z95c2" Jan 27 16:17:03 crc kubenswrapper[4767]: I0127 16:17:03.301085 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z95c2" Jan 27 16:17:03 crc kubenswrapper[4767]: I0127 16:17:03.781726 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z95c2"] Jan 27 16:17:04 crc kubenswrapper[4767]: I0127 16:17:04.643804 4767 generic.go:334] "Generic (PLEG): container finished" podID="fbc963d8-57a3-4a96-b9bb-a3b96986a6e2" containerID="3d1dfcff6cbbdf83cc0712b5a55434de6d6a5f66660d1d809e2762c946619b97" exitCode=0 Jan 27 16:17:04 crc kubenswrapper[4767]: I0127 16:17:04.643917 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z95c2" event={"ID":"fbc963d8-57a3-4a96-b9bb-a3b96986a6e2","Type":"ContainerDied","Data":"3d1dfcff6cbbdf83cc0712b5a55434de6d6a5f66660d1d809e2762c946619b97"} Jan 27 16:17:04 crc kubenswrapper[4767]: I0127 16:17:04.644268 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z95c2" event={"ID":"fbc963d8-57a3-4a96-b9bb-a3b96986a6e2","Type":"ContainerStarted","Data":"1a8cd6d2a11772c9e043a83ece2a9ae2e225e2d5cd0d2083dd01a710f1627128"} Jan 27 16:17:06 crc kubenswrapper[4767]: I0127 16:17:06.662356 4767 generic.go:334] "Generic (PLEG): container finished" podID="fbc963d8-57a3-4a96-b9bb-a3b96986a6e2" containerID="d060a16b6552810b9de13570be8c43cdd7ea6b5e7d3da53cad364436613a7a0a" exitCode=0 Jan 27 16:17:06 crc kubenswrapper[4767]: I0127 16:17:06.662455 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z95c2" event={"ID":"fbc963d8-57a3-4a96-b9bb-a3b96986a6e2","Type":"ContainerDied","Data":"d060a16b6552810b9de13570be8c43cdd7ea6b5e7d3da53cad364436613a7a0a"} Jan 27 16:17:07 crc kubenswrapper[4767]: I0127 16:17:07.672846 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z95c2" event={"ID":"fbc963d8-57a3-4a96-b9bb-a3b96986a6e2","Type":"ContainerStarted","Data":"a1f37de3be071eb6f3402e78acb838d7cef15a46348b71de3bb25ed3cb34b579"} Jan 27 16:17:07 crc kubenswrapper[4767]: I0127 16:17:07.697789 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z95c2" podStartSLOduration=3.270744907 podStartE2EDuration="5.697765487s" podCreationTimestamp="2026-01-27 16:17:02 +0000 UTC" firstStartedPulling="2026-01-27 16:17:04.646037047 +0000 UTC m=+1647.035054600" lastFinishedPulling="2026-01-27 16:17:07.073057657 +0000 UTC m=+1649.462075180" observedRunningTime="2026-01-27 16:17:07.69538178 +0000 UTC m=+1650.084399323" watchObservedRunningTime="2026-01-27 16:17:07.697765487 +0000 UTC m=+1650.086783050" Jan 27 16:17:13 crc kubenswrapper[4767]: I0127 16:17:13.302575 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z95c2" Jan 27 16:17:13 crc kubenswrapper[4767]: I0127 16:17:13.303147 4767 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z95c2" Jan 27 16:17:13 crc kubenswrapper[4767]: I0127 16:17:13.354804 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z95c2" Jan 27 16:17:13 crc kubenswrapper[4767]: I0127 16:17:13.762687 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z95c2" Jan 27 16:17:13 crc kubenswrapper[4767]: I0127 16:17:13.818579 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z95c2"] Jan 27 16:17:15 crc kubenswrapper[4767]: I0127 16:17:15.732652 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z95c2" podUID="fbc963d8-57a3-4a96-b9bb-a3b96986a6e2" containerName="registry-server" containerID="cri-o://a1f37de3be071eb6f3402e78acb838d7cef15a46348b71de3bb25ed3cb34b579" gracePeriod=2 Jan 27 16:17:16 crc kubenswrapper[4767]: I0127 16:17:16.116788 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z95c2" Jan 27 16:17:16 crc kubenswrapper[4767]: I0127 16:17:16.229111 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbc963d8-57a3-4a96-b9bb-a3b96986a6e2-catalog-content\") pod \"fbc963d8-57a3-4a96-b9bb-a3b96986a6e2\" (UID: \"fbc963d8-57a3-4a96-b9bb-a3b96986a6e2\") " Jan 27 16:17:16 crc kubenswrapper[4767]: I0127 16:17:16.229260 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npqdw\" (UniqueName: \"kubernetes.io/projected/fbc963d8-57a3-4a96-b9bb-a3b96986a6e2-kube-api-access-npqdw\") pod \"fbc963d8-57a3-4a96-b9bb-a3b96986a6e2\" (UID: \"fbc963d8-57a3-4a96-b9bb-a3b96986a6e2\") " Jan 27 16:17:16 crc kubenswrapper[4767]: I0127 16:17:16.229344 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbc963d8-57a3-4a96-b9bb-a3b96986a6e2-utilities\") pod \"fbc963d8-57a3-4a96-b9bb-a3b96986a6e2\" (UID: \"fbc963d8-57a3-4a96-b9bb-a3b96986a6e2\") " Jan 27 16:17:16 crc kubenswrapper[4767]: I0127 16:17:16.230481 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbc963d8-57a3-4a96-b9bb-a3b96986a6e2-utilities" (OuterVolumeSpecName: "utilities") pod "fbc963d8-57a3-4a96-b9bb-a3b96986a6e2" (UID: "fbc963d8-57a3-4a96-b9bb-a3b96986a6e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:17:16 crc kubenswrapper[4767]: I0127 16:17:16.234645 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbc963d8-57a3-4a96-b9bb-a3b96986a6e2-kube-api-access-npqdw" (OuterVolumeSpecName: "kube-api-access-npqdw") pod "fbc963d8-57a3-4a96-b9bb-a3b96986a6e2" (UID: "fbc963d8-57a3-4a96-b9bb-a3b96986a6e2"). InnerVolumeSpecName "kube-api-access-npqdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:17:16 crc kubenswrapper[4767]: I0127 16:17:16.276985 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbc963d8-57a3-4a96-b9bb-a3b96986a6e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fbc963d8-57a3-4a96-b9bb-a3b96986a6e2" (UID: "fbc963d8-57a3-4a96-b9bb-a3b96986a6e2"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:17:16 crc kubenswrapper[4767]: I0127 16:17:16.330653 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbc963d8-57a3-4a96-b9bb-a3b96986a6e2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:17:16 crc kubenswrapper[4767]: I0127 16:17:16.330690 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npqdw\" (UniqueName: \"kubernetes.io/projected/fbc963d8-57a3-4a96-b9bb-a3b96986a6e2-kube-api-access-npqdw\") on node \"crc\" DevicePath \"\"" Jan 27 16:17:16 crc kubenswrapper[4767]: I0127 16:17:16.330703 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbc963d8-57a3-4a96-b9bb-a3b96986a6e2-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:17:16 crc kubenswrapper[4767]: I0127 16:17:16.742050 4767 generic.go:334] "Generic (PLEG): container finished" podID="fbc963d8-57a3-4a96-b9bb-a3b96986a6e2" containerID="a1f37de3be071eb6f3402e78acb838d7cef15a46348b71de3bb25ed3cb34b579" exitCode=0 Jan 27 16:17:16 crc kubenswrapper[4767]: I0127 16:17:16.742107 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z95c2" event={"ID":"fbc963d8-57a3-4a96-b9bb-a3b96986a6e2","Type":"ContainerDied","Data":"a1f37de3be071eb6f3402e78acb838d7cef15a46348b71de3bb25ed3cb34b579"} Jan 27 16:17:16 crc kubenswrapper[4767]: I0127 16:17:16.742137 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z95c2" Jan 27 16:17:16 crc kubenswrapper[4767]: I0127 16:17:16.742152 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z95c2" event={"ID":"fbc963d8-57a3-4a96-b9bb-a3b96986a6e2","Type":"ContainerDied","Data":"1a8cd6d2a11772c9e043a83ece2a9ae2e225e2d5cd0d2083dd01a710f1627128"} Jan 27 16:17:16 crc kubenswrapper[4767]: I0127 16:17:16.742185 4767 scope.go:117] "RemoveContainer" containerID="a1f37de3be071eb6f3402e78acb838d7cef15a46348b71de3bb25ed3cb34b579" Jan 27 16:17:16 crc kubenswrapper[4767]: I0127 16:17:16.764932 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z95c2"] Jan 27 16:17:16 crc kubenswrapper[4767]: I0127 16:17:16.765384 4767 scope.go:117] "RemoveContainer" containerID="d060a16b6552810b9de13570be8c43cdd7ea6b5e7d3da53cad364436613a7a0a" Jan 27 16:17:16 crc kubenswrapper[4767]: I0127 16:17:16.775323 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z95c2"] Jan 27 16:17:16 crc kubenswrapper[4767]: I0127 16:17:16.785036 4767 scope.go:117] "RemoveContainer" containerID="3d1dfcff6cbbdf83cc0712b5a55434de6d6a5f66660d1d809e2762c946619b97" Jan 27 16:17:16 crc kubenswrapper[4767]: I0127 16:17:16.811541 4767 scope.go:117] "RemoveContainer" containerID="a1f37de3be071eb6f3402e78acb838d7cef15a46348b71de3bb25ed3cb34b579" Jan 27 16:17:16 crc kubenswrapper[4767]: E0127 16:17:16.812053 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1f37de3be071eb6f3402e78acb838d7cef15a46348b71de3bb25ed3cb34b579\": container with ID starting with a1f37de3be071eb6f3402e78acb838d7cef15a46348b71de3bb25ed3cb34b579 not found: ID does not exist" containerID="a1f37de3be071eb6f3402e78acb838d7cef15a46348b71de3bb25ed3cb34b579" Jan 27 16:17:16 crc 
kubenswrapper[4767]: I0127 16:17:16.812121 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1f37de3be071eb6f3402e78acb838d7cef15a46348b71de3bb25ed3cb34b579"} err="failed to get container status \"a1f37de3be071eb6f3402e78acb838d7cef15a46348b71de3bb25ed3cb34b579\": rpc error: code = NotFound desc = could not find container \"a1f37de3be071eb6f3402e78acb838d7cef15a46348b71de3bb25ed3cb34b579\": container with ID starting with a1f37de3be071eb6f3402e78acb838d7cef15a46348b71de3bb25ed3cb34b579 not found: ID does not exist" Jan 27 16:17:16 crc kubenswrapper[4767]: I0127 16:17:16.812147 4767 scope.go:117] "RemoveContainer" containerID="d060a16b6552810b9de13570be8c43cdd7ea6b5e7d3da53cad364436613a7a0a" Jan 27 16:17:16 crc kubenswrapper[4767]: E0127 16:17:16.812535 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d060a16b6552810b9de13570be8c43cdd7ea6b5e7d3da53cad364436613a7a0a\": container with ID starting with d060a16b6552810b9de13570be8c43cdd7ea6b5e7d3da53cad364436613a7a0a not found: ID does not exist" containerID="d060a16b6552810b9de13570be8c43cdd7ea6b5e7d3da53cad364436613a7a0a" Jan 27 16:17:16 crc kubenswrapper[4767]: I0127 16:17:16.812570 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d060a16b6552810b9de13570be8c43cdd7ea6b5e7d3da53cad364436613a7a0a"} err="failed to get container status \"d060a16b6552810b9de13570be8c43cdd7ea6b5e7d3da53cad364436613a7a0a\": rpc error: code = NotFound desc = could not find container \"d060a16b6552810b9de13570be8c43cdd7ea6b5e7d3da53cad364436613a7a0a\": container with ID starting with d060a16b6552810b9de13570be8c43cdd7ea6b5e7d3da53cad364436613a7a0a not found: ID does not exist" Jan 27 16:17:16 crc kubenswrapper[4767]: I0127 16:17:16.812596 4767 scope.go:117] "RemoveContainer" containerID="3d1dfcff6cbbdf83cc0712b5a55434de6d6a5f66660d1d809e2762c946619b97" Jan 27 16:17:16 crc kubenswrapper[4767]: E0127 16:17:16.812903 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d1dfcff6cbbdf83cc0712b5a55434de6d6a5f66660d1d809e2762c946619b97\": container with ID starting with 3d1dfcff6cbbdf83cc0712b5a55434de6d6a5f66660d1d809e2762c946619b97 not found: ID does not exist" containerID="3d1dfcff6cbbdf83cc0712b5a55434de6d6a5f66660d1d809e2762c946619b97" Jan 27 16:17:16 crc kubenswrapper[4767]: I0127 16:17:16.812938 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d1dfcff6cbbdf83cc0712b5a55434de6d6a5f66660d1d809e2762c946619b97"} err="failed to get container status \"3d1dfcff6cbbdf83cc0712b5a55434de6d6a5f66660d1d809e2762c946619b97\": rpc error: code = NotFound desc = could not find container \"3d1dfcff6cbbdf83cc0712b5a55434de6d6a5f66660d1d809e2762c946619b97\": container with ID starting with 3d1dfcff6cbbdf83cc0712b5a55434de6d6a5f66660d1d809e2762c946619b97 not found: ID does not exist" Jan 27 16:17:18 crc kubenswrapper[4767]: I0127 16:17:18.341518 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbc963d8-57a3-4a96-b9bb-a3b96986a6e2" path="/var/lib/kubelet/pods/fbc963d8-57a3-4a96-b9bb-a3b96986a6e2/volumes" Jan 27 16:17:24 crc kubenswrapper[4767]: I0127 16:17:24.858121 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:17:24 crc kubenswrapper[4767]: I0127 16:17:24.859359 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:17:54 crc kubenswrapper[4767]: I0127 16:17:54.857920 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:17:54 crc kubenswrapper[4767]: I0127 16:17:54.858478 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:17:54 crc kubenswrapper[4767]: I0127 16:17:54.858526 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" Jan 27 16:17:54 crc kubenswrapper[4767]: I0127 16:17:54.859166 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf"} pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 16:17:54 crc kubenswrapper[4767]: I0127 16:17:54.859312 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" containerID="cri-o://513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf" gracePeriod=600 Jan 27 16:17:54 crc kubenswrapper[4767]: E0127 16:17:54.990225 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:17:55 crc kubenswrapper[4767]: I0127 16:17:55.026388 4767 generic.go:334] "Generic (PLEG): container finished" podID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerID="513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf" exitCode=0 Jan 27 16:17:55 crc kubenswrapper[4767]: I0127 16:17:55.026429 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerDied","Data":"513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf"} Jan 27 16:17:55 crc kubenswrapper[4767]: I0127 16:17:55.026461 4767 scope.go:117] "RemoveContainer" containerID="87a4d484001a9613919682c6b99bf0f8377ea49c3768825aaeaa01ed98151eda" Jan 27 
16:17:55 crc kubenswrapper[4767]: I0127 16:17:55.026932 4767 scope.go:117] "RemoveContainer" containerID="513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf" Jan 27 16:17:55 crc kubenswrapper[4767]: E0127 16:17:55.027114 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:18:06 crc kubenswrapper[4767]: I0127 16:18:06.326127 4767 scope.go:117] "RemoveContainer" containerID="513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf" Jan 27 16:18:06 crc kubenswrapper[4767]: E0127 16:18:06.327300 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:18:21 crc kubenswrapper[4767]: I0127 16:18:21.325609 4767 scope.go:117] "RemoveContainer" containerID="513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf" Jan 27 16:18:21 crc kubenswrapper[4767]: E0127 16:18:21.326593 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:18:34 crc kubenswrapper[4767]: I0127 16:18:34.325691 4767 scope.go:117] "RemoveContainer" containerID="513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf" Jan 27 16:18:34 crc kubenswrapper[4767]: E0127 16:18:34.326679 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:18:48 crc kubenswrapper[4767]: I0127 16:18:48.332548 4767 scope.go:117] "RemoveContainer" containerID="513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf" Jan 27 16:18:48 crc kubenswrapper[4767]: E0127 16:18:48.333786 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:19:02 crc kubenswrapper[4767]: I0127 16:19:02.325549 4767 scope.go:117] "RemoveContainer" containerID="513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf" Jan 27 16:19:02 crc 
kubenswrapper[4767]: E0127 16:19:02.326288 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:19:16 crc kubenswrapper[4767]: I0127 16:19:16.326012 4767 scope.go:117] "RemoveContainer" containerID="513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf" Jan 27 16:19:16 crc kubenswrapper[4767]: E0127 16:19:16.329731 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:19:27 crc kubenswrapper[4767]: I0127 16:19:27.344300 4767 scope.go:117] "RemoveContainer" containerID="513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf" Jan 27 16:19:27 crc kubenswrapper[4767]: E0127 16:19:27.345268 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:19:40 crc kubenswrapper[4767]: I0127 16:19:40.326147 4767 scope.go:117] "RemoveContainer" containerID="513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf" Jan 27 16:19:40 crc kubenswrapper[4767]: E0127 16:19:40.326939 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:19:53 crc kubenswrapper[4767]: I0127 16:19:53.325467 4767 scope.go:117] "RemoveContainer" containerID="513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf" Jan 27 16:19:53 crc kubenswrapper[4767]: E0127 16:19:53.326150 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:20:08 crc kubenswrapper[4767]: I0127 16:20:08.328949 4767 scope.go:117] "RemoveContainer" containerID="513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf" Jan 27 16:20:08 crc kubenswrapper[4767]: E0127 16:20:08.329668 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:20:21 crc kubenswrapper[4767]: I0127 16:20:21.325733 4767 scope.go:117] "RemoveContainer" containerID="513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf" Jan 27 16:20:21 crc kubenswrapper[4767]: E0127 16:20:21.326648 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:20:36 crc kubenswrapper[4767]: I0127 16:20:36.325529 4767 scope.go:117] "RemoveContainer" containerID="513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf" Jan 27 16:20:36 crc kubenswrapper[4767]: E0127 16:20:36.326301 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:20:49 crc kubenswrapper[4767]: I0127 16:20:49.325636 4767 scope.go:117] "RemoveContainer" containerID="513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf" Jan 27 16:20:49 crc kubenswrapper[4767]: E0127 16:20:49.326544 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:21:04 crc kubenswrapper[4767]: I0127 16:21:04.325622 4767 scope.go:117] "RemoveContainer" containerID="513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf" Jan 27 16:21:04 crc kubenswrapper[4767]: E0127 16:21:04.326338 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:21:16 crc kubenswrapper[4767]: I0127 16:21:16.325462 4767 scope.go:117] "RemoveContainer" containerID="513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf" Jan 27 16:21:16 crc kubenswrapper[4767]: E0127 16:21:16.326167 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:21:31 crc kubenswrapper[4767]: I0127 16:21:31.326052 4767 scope.go:117] "RemoveContainer" containerID="513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf" Jan 27 16:21:31 crc kubenswrapper[4767]: E0127 16:21:31.326888 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:21:45 crc kubenswrapper[4767]: I0127 16:21:45.326014 4767 scope.go:117] "RemoveContainer" containerID="513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf" Jan 27 16:21:45 crc kubenswrapper[4767]: E0127 16:21:45.326775 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:21:58 crc kubenswrapper[4767]: I0127 16:21:58.329417 4767 scope.go:117] "RemoveContainer" containerID="513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf" Jan 27 16:21:58 crc kubenswrapper[4767]: E0127 16:21:58.330003 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:22:10 crc kubenswrapper[4767]: I0127 16:22:10.326597 4767 scope.go:117] "RemoveContainer" containerID="513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf" Jan 27 16:22:10 crc kubenswrapper[4767]: E0127 16:22:10.327391 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:22:25 crc kubenswrapper[4767]: I0127 16:22:25.325314 4767 scope.go:117] "RemoveContainer" containerID="513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf" Jan 27 16:22:25 crc kubenswrapper[4767]: E0127 16:22:25.325982 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:22:37 crc kubenswrapper[4767]: I0127 16:22:37.325883 4767 
scope.go:117] "RemoveContainer" containerID="513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf" Jan 27 16:22:37 crc kubenswrapper[4767]: E0127 16:22:37.327595 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:22:52 crc kubenswrapper[4767]: I0127 16:22:52.326084 4767 scope.go:117] "RemoveContainer" containerID="513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf" Jan 27 16:22:52 crc kubenswrapper[4767]: E0127 16:22:52.326913 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:23:04 crc kubenswrapper[4767]: I0127 16:23:04.325290 4767 scope.go:117] "RemoveContainer" containerID="513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf" Jan 27 16:23:04 crc kubenswrapper[4767]: I0127 16:23:04.951169 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerStarted","Data":"97d55ff553cc40bc63f7f2fd524b907d3f8b637a3ee5c3f1633b004859f0c818"} Jan 27 16:23:47 crc kubenswrapper[4767]: I0127 16:23:47.696570 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dp8hg"] Jan 27 16:23:47 crc kubenswrapper[4767]: E0127 16:23:47.697450 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbc963d8-57a3-4a96-b9bb-a3b96986a6e2" containerName="extract-content" Jan 27 16:23:47 crc kubenswrapper[4767]: I0127 16:23:47.697467 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbc963d8-57a3-4a96-b9bb-a3b96986a6e2" containerName="extract-content" Jan 27 16:23:47 crc kubenswrapper[4767]: E0127 16:23:47.697486 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbc963d8-57a3-4a96-b9bb-a3b96986a6e2" containerName="registry-server" Jan 27 16:23:47 crc kubenswrapper[4767]: I0127 16:23:47.697492 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbc963d8-57a3-4a96-b9bb-a3b96986a6e2" containerName="registry-server" Jan 27 16:23:47 crc kubenswrapper[4767]: E0127 16:23:47.697500 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbc963d8-57a3-4a96-b9bb-a3b96986a6e2" containerName="extract-utilities" Jan 27 16:23:47 crc kubenswrapper[4767]: I0127 16:23:47.697507 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbc963d8-57a3-4a96-b9bb-a3b96986a6e2" containerName="extract-utilities" Jan 27 16:23:47 crc kubenswrapper[4767]: I0127 16:23:47.697632 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbc963d8-57a3-4a96-b9bb-a3b96986a6e2" containerName="registry-server" Jan 27 16:23:47 crc kubenswrapper[4767]: I0127 16:23:47.698707 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dp8hg" Jan 27 16:23:47 crc kubenswrapper[4767]: I0127 16:23:47.714618 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dp8hg"] Jan 27 16:23:47 crc kubenswrapper[4767]: I0127 16:23:47.874711 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0ad4d72-68f5-4e00-a0ff-86896d4c7f94-catalog-content\") pod \"redhat-operators-dp8hg\" (UID: \"e0ad4d72-68f5-4e00-a0ff-86896d4c7f94\") " pod="openshift-marketplace/redhat-operators-dp8hg" Jan 27 16:23:47 crc kubenswrapper[4767]: I0127 16:23:47.875269 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl2sv\" (UniqueName: \"kubernetes.io/projected/e0ad4d72-68f5-4e00-a0ff-86896d4c7f94-kube-api-access-zl2sv\") pod \"redhat-operators-dp8hg\" (UID: \"e0ad4d72-68f5-4e00-a0ff-86896d4c7f94\") " pod="openshift-marketplace/redhat-operators-dp8hg" Jan 27 16:23:47 crc kubenswrapper[4767]: I0127 16:23:47.875525 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0ad4d72-68f5-4e00-a0ff-86896d4c7f94-utilities\") pod \"redhat-operators-dp8hg\" (UID: \"e0ad4d72-68f5-4e00-a0ff-86896d4c7f94\") " pod="openshift-marketplace/redhat-operators-dp8hg" Jan 27 16:23:47 crc kubenswrapper[4767]: I0127 16:23:47.977107 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl2sv\" (UniqueName: \"kubernetes.io/projected/e0ad4d72-68f5-4e00-a0ff-86896d4c7f94-kube-api-access-zl2sv\") pod \"redhat-operators-dp8hg\" (UID: \"e0ad4d72-68f5-4e00-a0ff-86896d4c7f94\") " pod="openshift-marketplace/redhat-operators-dp8hg" Jan 27 16:23:47 crc kubenswrapper[4767]: I0127 16:23:47.977160 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0ad4d72-68f5-4e00-a0ff-86896d4c7f94-utilities\") pod \"redhat-operators-dp8hg\" (UID: \"e0ad4d72-68f5-4e00-a0ff-86896d4c7f94\") " pod="openshift-marketplace/redhat-operators-dp8hg" Jan 27 16:23:47 crc kubenswrapper[4767]: I0127 16:23:47.977232 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0ad4d72-68f5-4e00-a0ff-86896d4c7f94-catalog-content\") pod \"redhat-operators-dp8hg\" (UID: \"e0ad4d72-68f5-4e00-a0ff-86896d4c7f94\") " pod="openshift-marketplace/redhat-operators-dp8hg" Jan 27 16:23:47 crc kubenswrapper[4767]: I0127 16:23:47.977727 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0ad4d72-68f5-4e00-a0ff-86896d4c7f94-utilities\") pod \"redhat-operators-dp8hg\" (UID: \"e0ad4d72-68f5-4e00-a0ff-86896d4c7f94\") " pod="openshift-marketplace/redhat-operators-dp8hg" Jan 27 16:23:47 crc kubenswrapper[4767]: I0127 16:23:47.977779 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0ad4d72-68f5-4e00-a0ff-86896d4c7f94-catalog-content\") pod \"redhat-operators-dp8hg\" (UID: \"e0ad4d72-68f5-4e00-a0ff-86896d4c7f94\") " pod="openshift-marketplace/redhat-operators-dp8hg" Jan 27 16:23:48 crc kubenswrapper[4767]: I0127 16:23:48.009977 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zl2sv\" (UniqueName: \"kubernetes.io/projected/e0ad4d72-68f5-4e00-a0ff-86896d4c7f94-kube-api-access-zl2sv\") pod \"redhat-operators-dp8hg\" (UID: \"e0ad4d72-68f5-4e00-a0ff-86896d4c7f94\") " pod="openshift-marketplace/redhat-operators-dp8hg" Jan 27 16:23:48 crc kubenswrapper[4767]: I0127 16:23:48.039669 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dp8hg" Jan 27 16:23:48 crc kubenswrapper[4767]: I0127 16:23:48.547600 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dp8hg"] Jan 27 16:23:49 crc kubenswrapper[4767]: I0127 16:23:49.314177 4767 generic.go:334] "Generic (PLEG): container finished" podID="e0ad4d72-68f5-4e00-a0ff-86896d4c7f94" containerID="a41e65964665c20122e8b85d725ac626a82de721059b5bcf36d83c34f35b1c2e" exitCode=0 Jan 27 16:23:49 crc kubenswrapper[4767]: I0127 16:23:49.314298 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dp8hg" event={"ID":"e0ad4d72-68f5-4e00-a0ff-86896d4c7f94","Type":"ContainerDied","Data":"a41e65964665c20122e8b85d725ac626a82de721059b5bcf36d83c34f35b1c2e"} Jan 27 16:23:49 crc kubenswrapper[4767]: I0127 16:23:49.314479 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dp8hg" event={"ID":"e0ad4d72-68f5-4e00-a0ff-86896d4c7f94","Type":"ContainerStarted","Data":"a2a82789df71b5661501a014d94a2c99709ee0ea86406b451401d3f99f82b821"} Jan 27 16:23:49 crc kubenswrapper[4767]: I0127 16:23:49.316744 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 16:23:50 crc kubenswrapper[4767]: I0127 16:23:50.322164 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dp8hg" event={"ID":"e0ad4d72-68f5-4e00-a0ff-86896d4c7f94","Type":"ContainerStarted","Data":"db39978e8a218c02c1e270eda11f680233d619a10188027be3c24b6c096d6c3c"} Jan 27 16:23:51 crc kubenswrapper[4767]: I0127 16:23:51.331239 4767 generic.go:334] "Generic (PLEG): container finished" podID="e0ad4d72-68f5-4e00-a0ff-86896d4c7f94" containerID="db39978e8a218c02c1e270eda11f680233d619a10188027be3c24b6c096d6c3c" exitCode=0 Jan 27 16:23:51 crc kubenswrapper[4767]: I0127 16:23:51.331288 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dp8hg" event={"ID":"e0ad4d72-68f5-4e00-a0ff-86896d4c7f94","Type":"ContainerDied","Data":"db39978e8a218c02c1e270eda11f680233d619a10188027be3c24b6c096d6c3c"} Jan 27 16:23:53 crc kubenswrapper[4767]: I0127 16:23:53.346416 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dp8hg" event={"ID":"e0ad4d72-68f5-4e00-a0ff-86896d4c7f94","Type":"ContainerStarted","Data":"79f485c8dd03b286ba6b5fc8d1a65387287eb16cef3ffed3add0797b52f54da9"} Jan 27 16:23:53 crc kubenswrapper[4767]: I0127 16:23:53.377809 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dp8hg" podStartSLOduration=2.711805818 podStartE2EDuration="6.377790492s" podCreationTimestamp="2026-01-27 16:23:47 +0000 UTC" firstStartedPulling="2026-01-27 16:23:49.316380824 +0000 UTC m=+2051.705398347" lastFinishedPulling="2026-01-27 16:23:52.982365498 +0000 UTC m=+2055.371383021" observedRunningTime="2026-01-27 16:23:53.375336133 +0000 UTC m=+2055.764353656" watchObservedRunningTime="2026-01-27 16:23:53.377790492 +0000 UTC m=+2055.766808015" Jan 27 16:23:58 crc 
kubenswrapper[4767]: I0127 16:23:58.040275 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dp8hg" Jan 27 16:23:58 crc kubenswrapper[4767]: I0127 16:23:58.040875 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dp8hg" Jan 27 16:23:59 crc kubenswrapper[4767]: I0127 16:23:59.084312 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dp8hg" podUID="e0ad4d72-68f5-4e00-a0ff-86896d4c7f94" containerName="registry-server" probeResult="failure" output=< Jan 27 16:23:59 crc kubenswrapper[4767]: timeout: failed to connect service ":50051" within 1s Jan 27 16:23:59 crc kubenswrapper[4767]: > Jan 27 16:24:08 crc kubenswrapper[4767]: I0127 16:24:08.102665 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dp8hg" Jan 27 16:24:08 crc kubenswrapper[4767]: I0127 16:24:08.149460 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dp8hg" Jan 27 16:24:08 crc kubenswrapper[4767]: I0127 16:24:08.350578 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dp8hg"] Jan 27 16:24:09 crc kubenswrapper[4767]: I0127 16:24:09.492912 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dp8hg" podUID="e0ad4d72-68f5-4e00-a0ff-86896d4c7f94" containerName="registry-server" containerID="cri-o://79f485c8dd03b286ba6b5fc8d1a65387287eb16cef3ffed3add0797b52f54da9" gracePeriod=2 Jan 27 16:24:09 crc kubenswrapper[4767]: I0127 16:24:09.897544 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dp8hg" Jan 27 16:24:10 crc kubenswrapper[4767]: I0127 16:24:10.032350 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0ad4d72-68f5-4e00-a0ff-86896d4c7f94-utilities\") pod \"e0ad4d72-68f5-4e00-a0ff-86896d4c7f94\" (UID: \"e0ad4d72-68f5-4e00-a0ff-86896d4c7f94\") " Jan 27 16:24:10 crc kubenswrapper[4767]: I0127 16:24:10.032617 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0ad4d72-68f5-4e00-a0ff-86896d4c7f94-catalog-content\") pod \"e0ad4d72-68f5-4e00-a0ff-86896d4c7f94\" (UID: \"e0ad4d72-68f5-4e00-a0ff-86896d4c7f94\") " Jan 27 16:24:10 crc kubenswrapper[4767]: I0127 16:24:10.032663 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl2sv\" (UniqueName: \"kubernetes.io/projected/e0ad4d72-68f5-4e00-a0ff-86896d4c7f94-kube-api-access-zl2sv\") pod \"e0ad4d72-68f5-4e00-a0ff-86896d4c7f94\" (UID: \"e0ad4d72-68f5-4e00-a0ff-86896d4c7f94\") " Jan 27 16:24:10 crc kubenswrapper[4767]: I0127 16:24:10.033588 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0ad4d72-68f5-4e00-a0ff-86896d4c7f94-utilities" (OuterVolumeSpecName: "utilities") pod "e0ad4d72-68f5-4e00-a0ff-86896d4c7f94" (UID: "e0ad4d72-68f5-4e00-a0ff-86896d4c7f94"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:24:10 crc kubenswrapper[4767]: I0127 16:24:10.043405 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0ad4d72-68f5-4e00-a0ff-86896d4c7f94-kube-api-access-zl2sv" (OuterVolumeSpecName: "kube-api-access-zl2sv") pod "e0ad4d72-68f5-4e00-a0ff-86896d4c7f94" (UID: "e0ad4d72-68f5-4e00-a0ff-86896d4c7f94"). InnerVolumeSpecName "kube-api-access-zl2sv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:24:10 crc kubenswrapper[4767]: I0127 16:24:10.136529 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0ad4d72-68f5-4e00-a0ff-86896d4c7f94-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:24:10 crc kubenswrapper[4767]: I0127 16:24:10.136644 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl2sv\" (UniqueName: \"kubernetes.io/projected/e0ad4d72-68f5-4e00-a0ff-86896d4c7f94-kube-api-access-zl2sv\") on node \"crc\" DevicePath \"\"" Jan 27 16:24:10 crc kubenswrapper[4767]: I0127 16:24:10.172792 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0ad4d72-68f5-4e00-a0ff-86896d4c7f94-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0ad4d72-68f5-4e00-a0ff-86896d4c7f94" (UID: "e0ad4d72-68f5-4e00-a0ff-86896d4c7f94"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:24:10 crc kubenswrapper[4767]: I0127 16:24:10.237916 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0ad4d72-68f5-4e00-a0ff-86896d4c7f94-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:24:10 crc kubenswrapper[4767]: I0127 16:24:10.504642 4767 generic.go:334] "Generic (PLEG): container finished" podID="e0ad4d72-68f5-4e00-a0ff-86896d4c7f94" containerID="79f485c8dd03b286ba6b5fc8d1a65387287eb16cef3ffed3add0797b52f54da9" exitCode=0 Jan 27 16:24:10 crc kubenswrapper[4767]: I0127 16:24:10.504729 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dp8hg" event={"ID":"e0ad4d72-68f5-4e00-a0ff-86896d4c7f94","Type":"ContainerDied","Data":"79f485c8dd03b286ba6b5fc8d1a65387287eb16cef3ffed3add0797b52f54da9"} Jan 27 16:24:10 crc kubenswrapper[4767]: I0127 16:24:10.504796 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dp8hg" event={"ID":"e0ad4d72-68f5-4e00-a0ff-86896d4c7f94","Type":"ContainerDied","Data":"a2a82789df71b5661501a014d94a2c99709ee0ea86406b451401d3f99f82b821"} Jan 27 16:24:10 crc kubenswrapper[4767]: I0127 16:24:10.504792 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dp8hg" Jan 27 16:24:10 crc kubenswrapper[4767]: I0127 16:24:10.504881 4767 scope.go:117] "RemoveContainer" containerID="79f485c8dd03b286ba6b5fc8d1a65387287eb16cef3ffed3add0797b52f54da9" Jan 27 16:24:10 crc kubenswrapper[4767]: I0127 16:24:10.539318 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dp8hg"] Jan 27 16:24:10 crc kubenswrapper[4767]: I0127 16:24:10.539944 4767 scope.go:117] "RemoveContainer" containerID="db39978e8a218c02c1e270eda11f680233d619a10188027be3c24b6c096d6c3c" Jan 27 16:24:10 crc kubenswrapper[4767]: I0127 16:24:10.555960 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dp8hg"] Jan 27 16:24:10 crc kubenswrapper[4767]: I0127 16:24:10.564937 4767 scope.go:117] "RemoveContainer" containerID="a41e65964665c20122e8b85d725ac626a82de721059b5bcf36d83c34f35b1c2e" Jan 27 16:24:10 crc kubenswrapper[4767]: I0127 16:24:10.592752 4767 scope.go:117] "RemoveContainer" containerID="79f485c8dd03b286ba6b5fc8d1a65387287eb16cef3ffed3add0797b52f54da9" Jan 27 16:24:10 crc kubenswrapper[4767]: E0127 16:24:10.593574 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79f485c8dd03b286ba6b5fc8d1a65387287eb16cef3ffed3add0797b52f54da9\": container with ID starting with 79f485c8dd03b286ba6b5fc8d1a65387287eb16cef3ffed3add0797b52f54da9 not found: ID does not exist" containerID="79f485c8dd03b286ba6b5fc8d1a65387287eb16cef3ffed3add0797b52f54da9" Jan 27 16:24:10 crc kubenswrapper[4767]: I0127 16:24:10.593615 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79f485c8dd03b286ba6b5fc8d1a65387287eb16cef3ffed3add0797b52f54da9"} err="failed to get container status \"79f485c8dd03b286ba6b5fc8d1a65387287eb16cef3ffed3add0797b52f54da9\": rpc error: code = NotFound desc = could not find container \"79f485c8dd03b286ba6b5fc8d1a65387287eb16cef3ffed3add0797b52f54da9\": container with ID starting with 79f485c8dd03b286ba6b5fc8d1a65387287eb16cef3ffed3add0797b52f54da9 not found: ID does not exist" Jan 27 16:24:10 crc kubenswrapper[4767]: I0127 16:24:10.593650 4767 scope.go:117] "RemoveContainer" containerID="db39978e8a218c02c1e270eda11f680233d619a10188027be3c24b6c096d6c3c" Jan 27 16:24:10 crc kubenswrapper[4767]: E0127 16:24:10.594237 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db39978e8a218c02c1e270eda11f680233d619a10188027be3c24b6c096d6c3c\": container with ID starting with db39978e8a218c02c1e270eda11f680233d619a10188027be3c24b6c096d6c3c not found: ID does not exist" containerID="db39978e8a218c02c1e270eda11f680233d619a10188027be3c24b6c096d6c3c" Jan 27 16:24:10 crc kubenswrapper[4767]: I0127 16:24:10.594279 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db39978e8a218c02c1e270eda11f680233d619a10188027be3c24b6c096d6c3c"} err="failed to get container status \"db39978e8a218c02c1e270eda11f680233d619a10188027be3c24b6c096d6c3c\": rpc error: code = NotFound desc = could not find container \"db39978e8a218c02c1e270eda11f680233d619a10188027be3c24b6c096d6c3c\": container with ID starting with db39978e8a218c02c1e270eda11f680233d619a10188027be3c24b6c096d6c3c not found: ID does not exist" Jan 27 16:24:10 crc kubenswrapper[4767]: I0127 16:24:10.594298 4767 scope.go:117] "RemoveContainer" 
containerID="a41e65964665c20122e8b85d725ac626a82de721059b5bcf36d83c34f35b1c2e" Jan 27 16:24:10 crc kubenswrapper[4767]: E0127 16:24:10.594737 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a41e65964665c20122e8b85d725ac626a82de721059b5bcf36d83c34f35b1c2e\": container with ID starting with a41e65964665c20122e8b85d725ac626a82de721059b5bcf36d83c34f35b1c2e not found: ID does not exist" containerID="a41e65964665c20122e8b85d725ac626a82de721059b5bcf36d83c34f35b1c2e" Jan 27 16:24:10 crc kubenswrapper[4767]: I0127 16:24:10.594796 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a41e65964665c20122e8b85d725ac626a82de721059b5bcf36d83c34f35b1c2e"} err="failed to get container status \"a41e65964665c20122e8b85d725ac626a82de721059b5bcf36d83c34f35b1c2e\": rpc error: code = NotFound desc = could not find container \"a41e65964665c20122e8b85d725ac626a82de721059b5bcf36d83c34f35b1c2e\": container with ID starting with a41e65964665c20122e8b85d725ac626a82de721059b5bcf36d83c34f35b1c2e not found: ID does not exist" Jan 27 16:24:12 crc kubenswrapper[4767]: I0127 16:24:12.334114 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0ad4d72-68f5-4e00-a0ff-86896d4c7f94" path="/var/lib/kubelet/pods/e0ad4d72-68f5-4e00-a0ff-86896d4c7f94/volumes" Jan 27 16:25:24 crc kubenswrapper[4767]: I0127 16:25:24.857322 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:25:24 crc kubenswrapper[4767]: I0127 16:25:24.857772 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:25:54 crc kubenswrapper[4767]: I0127 16:25:54.857318 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:25:54 crc kubenswrapper[4767]: I0127 16:25:54.857932 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:26:24 crc kubenswrapper[4767]: I0127 16:26:24.857839 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:26:24 crc kubenswrapper[4767]: I0127 16:26:24.858462 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:26:24 crc kubenswrapper[4767]: I0127 16:26:24.858513 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" Jan 27 16:26:24 crc kubenswrapper[4767]: I0127 16:26:24.859116 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"97d55ff553cc40bc63f7f2fd524b907d3f8b637a3ee5c3f1633b004859f0c818"} pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 16:26:24 crc kubenswrapper[4767]: I0127 16:26:24.859181 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" containerID="cri-o://97d55ff553cc40bc63f7f2fd524b907d3f8b637a3ee5c3f1633b004859f0c818" gracePeriod=600 Jan 27 16:26:25 crc kubenswrapper[4767]: I0127 16:26:25.462302 4767 generic.go:334] "Generic (PLEG): container finished" podID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerID="97d55ff553cc40bc63f7f2fd524b907d3f8b637a3ee5c3f1633b004859f0c818" exitCode=0 Jan 27 16:26:25 crc kubenswrapper[4767]: I0127 16:26:25.462570 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerDied","Data":"97d55ff553cc40bc63f7f2fd524b907d3f8b637a3ee5c3f1633b004859f0c818"} Jan 27 16:26:25 crc kubenswrapper[4767]: I0127 16:26:25.462597 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerStarted","Data":"e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc"} Jan 27 16:26:25 crc kubenswrapper[4767]: I0127 16:26:25.462612 4767 scope.go:117] "RemoveContainer" containerID="513dfcb562f036d7c89d503a52afa4121d05435728728e7dc4cc0b38dbd7dddf" Jan 27 16:28:04 crc kubenswrapper[4767]: I0127 16:28:04.601955 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k8pks"] Jan 27 16:28:04 crc kubenswrapper[4767]: E0127 16:28:04.603180 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0ad4d72-68f5-4e00-a0ff-86896d4c7f94" containerName="registry-server" Jan 27 16:28:04 crc kubenswrapper[4767]: I0127 16:28:04.603198 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ad4d72-68f5-4e00-a0ff-86896d4c7f94" containerName="registry-server" Jan 27 16:28:04 crc kubenswrapper[4767]: E0127 16:28:04.603212 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0ad4d72-68f5-4e00-a0ff-86896d4c7f94" containerName="extract-content" Jan 27 16:28:04 crc kubenswrapper[4767]: I0127 16:28:04.603235 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ad4d72-68f5-4e00-a0ff-86896d4c7f94" containerName="extract-content" Jan 27 16:28:04 crc kubenswrapper[4767]: E0127 16:28:04.603248 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0ad4d72-68f5-4e00-a0ff-86896d4c7f94" containerName="extract-utilities" Jan 27 16:28:04 crc kubenswrapper[4767]: I0127 16:28:04.603256 4767 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e0ad4d72-68f5-4e00-a0ff-86896d4c7f94" containerName="extract-utilities" Jan 27 16:28:04 crc kubenswrapper[4767]: I0127 16:28:04.603452 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0ad4d72-68f5-4e00-a0ff-86896d4c7f94" containerName="registry-server" Jan 27 16:28:04 crc kubenswrapper[4767]: I0127 16:28:04.604814 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k8pks" Jan 27 16:28:04 crc kubenswrapper[4767]: I0127 16:28:04.646137 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k8pks"] Jan 27 16:28:04 crc kubenswrapper[4767]: I0127 16:28:04.761368 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccb4899b-561b-4798-8c1e-932afdf3b1fd-catalog-content\") pod \"community-operators-k8pks\" (UID: \"ccb4899b-561b-4798-8c1e-932afdf3b1fd\") " pod="openshift-marketplace/community-operators-k8pks" Jan 27 16:28:04 crc kubenswrapper[4767]: I0127 16:28:04.761426 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccb4899b-561b-4798-8c1e-932afdf3b1fd-utilities\") pod \"community-operators-k8pks\" (UID: \"ccb4899b-561b-4798-8c1e-932afdf3b1fd\") " pod="openshift-marketplace/community-operators-k8pks" Jan 27 16:28:04 crc kubenswrapper[4767]: I0127 16:28:04.761487 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsq66\" (UniqueName: \"kubernetes.io/projected/ccb4899b-561b-4798-8c1e-932afdf3b1fd-kube-api-access-qsq66\") pod \"community-operators-k8pks\" (UID: \"ccb4899b-561b-4798-8c1e-932afdf3b1fd\") " pod="openshift-marketplace/community-operators-k8pks" Jan 27 16:28:04 crc kubenswrapper[4767]: I0127 16:28:04.862659 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsq66\" (UniqueName: \"kubernetes.io/projected/ccb4899b-561b-4798-8c1e-932afdf3b1fd-kube-api-access-qsq66\") pod \"community-operators-k8pks\" (UID: \"ccb4899b-561b-4798-8c1e-932afdf3b1fd\") " pod="openshift-marketplace/community-operators-k8pks" Jan 27 16:28:04 crc kubenswrapper[4767]: I0127 16:28:04.862753 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccb4899b-561b-4798-8c1e-932afdf3b1fd-catalog-content\") pod \"community-operators-k8pks\" (UID: \"ccb4899b-561b-4798-8c1e-932afdf3b1fd\") " pod="openshift-marketplace/community-operators-k8pks" Jan 27 16:28:04 crc kubenswrapper[4767]: I0127 16:28:04.862790 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccb4899b-561b-4798-8c1e-932afdf3b1fd-utilities\") pod \"community-operators-k8pks\" (UID: \"ccb4899b-561b-4798-8c1e-932afdf3b1fd\") " pod="openshift-marketplace/community-operators-k8pks" Jan 27 16:28:04 crc kubenswrapper[4767]: I0127 16:28:04.863326 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccb4899b-561b-4798-8c1e-932afdf3b1fd-utilities\") pod \"community-operators-k8pks\" (UID: \"ccb4899b-561b-4798-8c1e-932afdf3b1fd\") " pod="openshift-marketplace/community-operators-k8pks" Jan 27 16:28:04 crc kubenswrapper[4767]: I0127 16:28:04.863558 4767 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccb4899b-561b-4798-8c1e-932afdf3b1fd-catalog-content\") pod \"community-operators-k8pks\" (UID: \"ccb4899b-561b-4798-8c1e-932afdf3b1fd\") " pod="openshift-marketplace/community-operators-k8pks" Jan 27 16:28:04 crc kubenswrapper[4767]: I0127 16:28:04.883139 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsq66\" (UniqueName: \"kubernetes.io/projected/ccb4899b-561b-4798-8c1e-932afdf3b1fd-kube-api-access-qsq66\") pod \"community-operators-k8pks\" (UID: \"ccb4899b-561b-4798-8c1e-932afdf3b1fd\") " pod="openshift-marketplace/community-operators-k8pks" Jan 27 16:28:04 crc kubenswrapper[4767]: I0127 16:28:04.939055 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k8pks" Jan 27 16:28:05 crc kubenswrapper[4767]: I0127 16:28:05.441365 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k8pks"] Jan 27 16:28:06 crc kubenswrapper[4767]: I0127 16:28:06.225510 4767 generic.go:334] "Generic (PLEG): container finished" podID="ccb4899b-561b-4798-8c1e-932afdf3b1fd" containerID="4ab3f710f54dd71bb0e15145e6ece1212d7fb367a006054023af06b1855d096e" exitCode=0 Jan 27 16:28:06 crc kubenswrapper[4767]: I0127 16:28:06.225562 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8pks" event={"ID":"ccb4899b-561b-4798-8c1e-932afdf3b1fd","Type":"ContainerDied","Data":"4ab3f710f54dd71bb0e15145e6ece1212d7fb367a006054023af06b1855d096e"} Jan 27 16:28:06 crc kubenswrapper[4767]: I0127 16:28:06.225799 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8pks" event={"ID":"ccb4899b-561b-4798-8c1e-932afdf3b1fd","Type":"ContainerStarted","Data":"e88c77bdf772738e5c43c73c5e1bb7f5603f15a0ae92aa4c30e8a9e70e73c98f"} Jan 27 16:28:07 crc kubenswrapper[4767]: I0127 16:28:07.234268 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8pks" event={"ID":"ccb4899b-561b-4798-8c1e-932afdf3b1fd","Type":"ContainerStarted","Data":"55ddeaddb22464ecbcd897a185a267dc2ed34c903b5dd14c2334566c459ff181"} Jan 27 16:28:08 crc kubenswrapper[4767]: I0127 16:28:08.242881 4767 generic.go:334] "Generic (PLEG): container finished" podID="ccb4899b-561b-4798-8c1e-932afdf3b1fd" containerID="55ddeaddb22464ecbcd897a185a267dc2ed34c903b5dd14c2334566c459ff181" exitCode=0 Jan 27 16:28:08 crc kubenswrapper[4767]: I0127 16:28:08.242971 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8pks" event={"ID":"ccb4899b-561b-4798-8c1e-932afdf3b1fd","Type":"ContainerDied","Data":"55ddeaddb22464ecbcd897a185a267dc2ed34c903b5dd14c2334566c459ff181"} Jan 27 16:28:09 crc kubenswrapper[4767]: I0127 16:28:09.250501 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8pks" event={"ID":"ccb4899b-561b-4798-8c1e-932afdf3b1fd","Type":"ContainerStarted","Data":"7915628e5baa15d41bc6f24b6519e43842405e2d5e96e80f691e6823eb03bfcc"} Jan 27 16:28:09 crc kubenswrapper[4767]: I0127 16:28:09.270056 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k8pks" podStartSLOduration=2.827890575 podStartE2EDuration="5.27003833s" podCreationTimestamp="2026-01-27 16:28:04 +0000 UTC" firstStartedPulling="2026-01-27 
16:28:06.226848099 +0000 UTC m=+2308.615865622" lastFinishedPulling="2026-01-27 16:28:08.668995854 +0000 UTC m=+2311.058013377" observedRunningTime="2026-01-27 16:28:09.265756519 +0000 UTC m=+2311.654774042" watchObservedRunningTime="2026-01-27 16:28:09.27003833 +0000 UTC m=+2311.659055853" Jan 27 16:28:12 crc kubenswrapper[4767]: I0127 16:28:12.809896 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rx2lz"] Jan 27 16:28:12 crc kubenswrapper[4767]: I0127 16:28:12.811729 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rx2lz" Jan 27 16:28:12 crc kubenswrapper[4767]: I0127 16:28:12.825534 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rx2lz"] Jan 27 16:28:12 crc kubenswrapper[4767]: I0127 16:28:12.991304 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e160a35b-4470-48d1-82db-0cd039bfaf9a-catalog-content\") pod \"certified-operators-rx2lz\" (UID: \"e160a35b-4470-48d1-82db-0cd039bfaf9a\") " pod="openshift-marketplace/certified-operators-rx2lz" Jan 27 16:28:12 crc kubenswrapper[4767]: I0127 16:28:12.991366 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e160a35b-4470-48d1-82db-0cd039bfaf9a-utilities\") pod \"certified-operators-rx2lz\" (UID: \"e160a35b-4470-48d1-82db-0cd039bfaf9a\") " pod="openshift-marketplace/certified-operators-rx2lz" Jan 27 16:28:12 crc kubenswrapper[4767]: I0127 16:28:12.991410 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbmds\" (UniqueName: \"kubernetes.io/projected/e160a35b-4470-48d1-82db-0cd039bfaf9a-kube-api-access-cbmds\") pod \"certified-operators-rx2lz\" (UID: \"e160a35b-4470-48d1-82db-0cd039bfaf9a\") " pod="openshift-marketplace/certified-operators-rx2lz" Jan 27 16:28:13 crc kubenswrapper[4767]: I0127 16:28:13.092746 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e160a35b-4470-48d1-82db-0cd039bfaf9a-catalog-content\") pod \"certified-operators-rx2lz\" (UID: \"e160a35b-4470-48d1-82db-0cd039bfaf9a\") " pod="openshift-marketplace/certified-operators-rx2lz" Jan 27 16:28:13 crc kubenswrapper[4767]: I0127 16:28:13.092808 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e160a35b-4470-48d1-82db-0cd039bfaf9a-utilities\") pod \"certified-operators-rx2lz\" (UID: \"e160a35b-4470-48d1-82db-0cd039bfaf9a\") " pod="openshift-marketplace/certified-operators-rx2lz" Jan 27 16:28:13 crc kubenswrapper[4767]: I0127 16:28:13.092851 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbmds\" (UniqueName: \"kubernetes.io/projected/e160a35b-4470-48d1-82db-0cd039bfaf9a-kube-api-access-cbmds\") pod \"certified-operators-rx2lz\" (UID: \"e160a35b-4470-48d1-82db-0cd039bfaf9a\") " pod="openshift-marketplace/certified-operators-rx2lz" Jan 27 16:28:13 crc kubenswrapper[4767]: I0127 16:28:13.093303 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e160a35b-4470-48d1-82db-0cd039bfaf9a-catalog-content\") pod \"certified-operators-rx2lz\" 
(UID: \"e160a35b-4470-48d1-82db-0cd039bfaf9a\") " pod="openshift-marketplace/certified-operators-rx2lz" Jan 27 16:28:13 crc kubenswrapper[4767]: I0127 16:28:13.093377 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e160a35b-4470-48d1-82db-0cd039bfaf9a-utilities\") pod \"certified-operators-rx2lz\" (UID: \"e160a35b-4470-48d1-82db-0cd039bfaf9a\") " pod="openshift-marketplace/certified-operators-rx2lz" Jan 27 16:28:13 crc kubenswrapper[4767]: I0127 16:28:13.113888 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbmds\" (UniqueName: \"kubernetes.io/projected/e160a35b-4470-48d1-82db-0cd039bfaf9a-kube-api-access-cbmds\") pod \"certified-operators-rx2lz\" (UID: \"e160a35b-4470-48d1-82db-0cd039bfaf9a\") " pod="openshift-marketplace/certified-operators-rx2lz" Jan 27 16:28:13 crc kubenswrapper[4767]: I0127 16:28:13.137647 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rx2lz" Jan 27 16:28:13 crc kubenswrapper[4767]: I0127 16:28:13.636351 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rx2lz"] Jan 27 16:28:13 crc kubenswrapper[4767]: W0127 16:28:13.644895 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode160a35b_4470_48d1_82db_0cd039bfaf9a.slice/crio-0cbef0b85eb4c8e3af551bed2d2cac27a54863912bb94b1869a72c45d422be04 WatchSource:0}: Error finding container 0cbef0b85eb4c8e3af551bed2d2cac27a54863912bb94b1869a72c45d422be04: Status 404 returned error can't find the container with id 0cbef0b85eb4c8e3af551bed2d2cac27a54863912bb94b1869a72c45d422be04 Jan 27 16:28:14 crc kubenswrapper[4767]: I0127 16:28:14.285085 4767 generic.go:334] "Generic (PLEG): container finished" podID="e160a35b-4470-48d1-82db-0cd039bfaf9a" containerID="90c612a8c5cf5e89ab642427d3434bae83b95c73c42e7d365975307b8fa3cec8" exitCode=0 Jan 27 16:28:14 crc kubenswrapper[4767]: I0127 16:28:14.285150 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rx2lz" event={"ID":"e160a35b-4470-48d1-82db-0cd039bfaf9a","Type":"ContainerDied","Data":"90c612a8c5cf5e89ab642427d3434bae83b95c73c42e7d365975307b8fa3cec8"} Jan 27 16:28:14 crc kubenswrapper[4767]: I0127 16:28:14.285223 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rx2lz" event={"ID":"e160a35b-4470-48d1-82db-0cd039bfaf9a","Type":"ContainerStarted","Data":"0cbef0b85eb4c8e3af551bed2d2cac27a54863912bb94b1869a72c45d422be04"} Jan 27 16:28:14 crc kubenswrapper[4767]: I0127 16:28:14.939344 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k8pks" Jan 27 16:28:14 crc kubenswrapper[4767]: I0127 16:28:14.939712 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k8pks" Jan 27 16:28:14 crc kubenswrapper[4767]: I0127 16:28:14.986242 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k8pks" Jan 27 16:28:15 crc kubenswrapper[4767]: I0127 16:28:15.293662 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rx2lz" 
event={"ID":"e160a35b-4470-48d1-82db-0cd039bfaf9a","Type":"ContainerStarted","Data":"6976c9854230caf08a4680f07d39d237b413f43f5559e2e5fcef8aa1c1c50c9c"} Jan 27 16:28:15 crc kubenswrapper[4767]: I0127 16:28:15.343977 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k8pks" Jan 27 16:28:16 crc kubenswrapper[4767]: I0127 16:28:16.301503 4767 generic.go:334] "Generic (PLEG): container finished" podID="e160a35b-4470-48d1-82db-0cd039bfaf9a" containerID="6976c9854230caf08a4680f07d39d237b413f43f5559e2e5fcef8aa1c1c50c9c" exitCode=0 Jan 27 16:28:16 crc kubenswrapper[4767]: I0127 16:28:16.301547 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rx2lz" event={"ID":"e160a35b-4470-48d1-82db-0cd039bfaf9a","Type":"ContainerDied","Data":"6976c9854230caf08a4680f07d39d237b413f43f5559e2e5fcef8aa1c1c50c9c"} Jan 27 16:28:17 crc kubenswrapper[4767]: I0127 16:28:17.313385 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rx2lz" event={"ID":"e160a35b-4470-48d1-82db-0cd039bfaf9a","Type":"ContainerStarted","Data":"3a3edf405171e84988e875d479a9f9d43ba43178043464e6c00eedfdf5d17ba5"} Jan 27 16:28:17 crc kubenswrapper[4767]: I0127 16:28:17.334803 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rx2lz" podStartSLOduration=2.89008644 podStartE2EDuration="5.334784326s" podCreationTimestamp="2026-01-27 16:28:12 +0000 UTC" firstStartedPulling="2026-01-27 16:28:14.28673993 +0000 UTC m=+2316.675757453" lastFinishedPulling="2026-01-27 16:28:16.731437816 +0000 UTC m=+2319.120455339" observedRunningTime="2026-01-27 16:28:17.329541228 +0000 UTC m=+2319.718558751" watchObservedRunningTime="2026-01-27 16:28:17.334784326 +0000 UTC m=+2319.723801849" Jan 27 16:28:17 crc kubenswrapper[4767]: I0127 16:28:17.379701 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k8pks"] Jan 27 16:28:17 crc kubenswrapper[4767]: I0127 16:28:17.379936 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k8pks" podUID="ccb4899b-561b-4798-8c1e-932afdf3b1fd" containerName="registry-server" containerID="cri-o://7915628e5baa15d41bc6f24b6519e43842405e2d5e96e80f691e6823eb03bfcc" gracePeriod=2 Jan 27 16:28:17 crc kubenswrapper[4767]: I0127 16:28:17.799674 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k8pks" Jan 27 16:28:17 crc kubenswrapper[4767]: I0127 16:28:17.869123 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsq66\" (UniqueName: \"kubernetes.io/projected/ccb4899b-561b-4798-8c1e-932afdf3b1fd-kube-api-access-qsq66\") pod \"ccb4899b-561b-4798-8c1e-932afdf3b1fd\" (UID: \"ccb4899b-561b-4798-8c1e-932afdf3b1fd\") " Jan 27 16:28:17 crc kubenswrapper[4767]: I0127 16:28:17.869168 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccb4899b-561b-4798-8c1e-932afdf3b1fd-catalog-content\") pod \"ccb4899b-561b-4798-8c1e-932afdf3b1fd\" (UID: \"ccb4899b-561b-4798-8c1e-932afdf3b1fd\") " Jan 27 16:28:17 crc kubenswrapper[4767]: I0127 16:28:17.869191 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccb4899b-561b-4798-8c1e-932afdf3b1fd-utilities\") pod \"ccb4899b-561b-4798-8c1e-932afdf3b1fd\" (UID: \"ccb4899b-561b-4798-8c1e-932afdf3b1fd\") " Jan 27 16:28:17 crc kubenswrapper[4767]: I0127 16:28:17.870030 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccb4899b-561b-4798-8c1e-932afdf3b1fd-utilities" (OuterVolumeSpecName: "utilities") pod "ccb4899b-561b-4798-8c1e-932afdf3b1fd" (UID: "ccb4899b-561b-4798-8c1e-932afdf3b1fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:28:17 crc kubenswrapper[4767]: I0127 16:28:17.874498 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccb4899b-561b-4798-8c1e-932afdf3b1fd-kube-api-access-qsq66" (OuterVolumeSpecName: "kube-api-access-qsq66") pod "ccb4899b-561b-4798-8c1e-932afdf3b1fd" (UID: "ccb4899b-561b-4798-8c1e-932afdf3b1fd"). InnerVolumeSpecName "kube-api-access-qsq66". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:28:17 crc kubenswrapper[4767]: I0127 16:28:17.923656 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccb4899b-561b-4798-8c1e-932afdf3b1fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ccb4899b-561b-4798-8c1e-932afdf3b1fd" (UID: "ccb4899b-561b-4798-8c1e-932afdf3b1fd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:28:17 crc kubenswrapper[4767]: I0127 16:28:17.970348 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsq66\" (UniqueName: \"kubernetes.io/projected/ccb4899b-561b-4798-8c1e-932afdf3b1fd-kube-api-access-qsq66\") on node \"crc\" DevicePath \"\"" Jan 27 16:28:17 crc kubenswrapper[4767]: I0127 16:28:17.970388 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccb4899b-561b-4798-8c1e-932afdf3b1fd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:28:17 crc kubenswrapper[4767]: I0127 16:28:17.970403 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccb4899b-561b-4798-8c1e-932afdf3b1fd-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:28:18 crc kubenswrapper[4767]: I0127 16:28:18.321809 4767 generic.go:334] "Generic (PLEG): container finished" podID="ccb4899b-561b-4798-8c1e-932afdf3b1fd" containerID="7915628e5baa15d41bc6f24b6519e43842405e2d5e96e80f691e6823eb03bfcc" exitCode=0 Jan 27 16:28:18 crc kubenswrapper[4767]: I0127 16:28:18.321919 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8pks" event={"ID":"ccb4899b-561b-4798-8c1e-932afdf3b1fd","Type":"ContainerDied","Data":"7915628e5baa15d41bc6f24b6519e43842405e2d5e96e80f691e6823eb03bfcc"} Jan 27 16:28:18 crc kubenswrapper[4767]: I0127 16:28:18.321962 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k8pks" event={"ID":"ccb4899b-561b-4798-8c1e-932afdf3b1fd","Type":"ContainerDied","Data":"e88c77bdf772738e5c43c73c5e1bb7f5603f15a0ae92aa4c30e8a9e70e73c98f"} Jan 27 16:28:18 crc kubenswrapper[4767]: I0127 16:28:18.321980 4767 scope.go:117] "RemoveContainer" containerID="7915628e5baa15d41bc6f24b6519e43842405e2d5e96e80f691e6823eb03bfcc" Jan 27 16:28:18 crc kubenswrapper[4767]: I0127 16:28:18.322498 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k8pks" Jan 27 16:28:18 crc kubenswrapper[4767]: I0127 16:28:18.342517 4767 scope.go:117] "RemoveContainer" containerID="55ddeaddb22464ecbcd897a185a267dc2ed34c903b5dd14c2334566c459ff181" Jan 27 16:28:18 crc kubenswrapper[4767]: I0127 16:28:18.368172 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k8pks"] Jan 27 16:28:18 crc kubenswrapper[4767]: I0127 16:28:18.371563 4767 scope.go:117] "RemoveContainer" containerID="4ab3f710f54dd71bb0e15145e6ece1212d7fb367a006054023af06b1855d096e" Jan 27 16:28:18 crc kubenswrapper[4767]: I0127 16:28:18.377083 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k8pks"] Jan 27 16:28:18 crc kubenswrapper[4767]: I0127 16:28:18.397530 4767 scope.go:117] "RemoveContainer" containerID="7915628e5baa15d41bc6f24b6519e43842405e2d5e96e80f691e6823eb03bfcc" Jan 27 16:28:18 crc kubenswrapper[4767]: E0127 16:28:18.397964 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7915628e5baa15d41bc6f24b6519e43842405e2d5e96e80f691e6823eb03bfcc\": container with ID starting with 7915628e5baa15d41bc6f24b6519e43842405e2d5e96e80f691e6823eb03bfcc not found: ID does not exist" containerID="7915628e5baa15d41bc6f24b6519e43842405e2d5e96e80f691e6823eb03bfcc" Jan 27 16:28:18 crc kubenswrapper[4767]: I0127 16:28:18.398006 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7915628e5baa15d41bc6f24b6519e43842405e2d5e96e80f691e6823eb03bfcc"} err="failed to get container status \"7915628e5baa15d41bc6f24b6519e43842405e2d5e96e80f691e6823eb03bfcc\": rpc error: code = NotFound desc = could not find container \"7915628e5baa15d41bc6f24b6519e43842405e2d5e96e80f691e6823eb03bfcc\": container with ID starting with 7915628e5baa15d41bc6f24b6519e43842405e2d5e96e80f691e6823eb03bfcc not found: ID does not exist" Jan 27 16:28:18 crc kubenswrapper[4767]: I0127 16:28:18.398031 4767 scope.go:117] "RemoveContainer" containerID="55ddeaddb22464ecbcd897a185a267dc2ed34c903b5dd14c2334566c459ff181" Jan 27 16:28:18 crc kubenswrapper[4767]: E0127 16:28:18.400517 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55ddeaddb22464ecbcd897a185a267dc2ed34c903b5dd14c2334566c459ff181\": container with ID starting with 55ddeaddb22464ecbcd897a185a267dc2ed34c903b5dd14c2334566c459ff181 not found: ID does not exist" containerID="55ddeaddb22464ecbcd897a185a267dc2ed34c903b5dd14c2334566c459ff181" Jan 27 16:28:18 crc kubenswrapper[4767]: I0127 16:28:18.400552 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55ddeaddb22464ecbcd897a185a267dc2ed34c903b5dd14c2334566c459ff181"} err="failed to get container status \"55ddeaddb22464ecbcd897a185a267dc2ed34c903b5dd14c2334566c459ff181\": rpc error: code = NotFound desc = could not find container \"55ddeaddb22464ecbcd897a185a267dc2ed34c903b5dd14c2334566c459ff181\": container with ID starting with 55ddeaddb22464ecbcd897a185a267dc2ed34c903b5dd14c2334566c459ff181 not found: ID does not exist" Jan 27 16:28:18 crc kubenswrapper[4767]: I0127 16:28:18.400574 4767 scope.go:117] "RemoveContainer" containerID="4ab3f710f54dd71bb0e15145e6ece1212d7fb367a006054023af06b1855d096e" Jan 27 16:28:18 crc kubenswrapper[4767]: E0127 16:28:18.401201 4767 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4ab3f710f54dd71bb0e15145e6ece1212d7fb367a006054023af06b1855d096e\": container with ID starting with 4ab3f710f54dd71bb0e15145e6ece1212d7fb367a006054023af06b1855d096e not found: ID does not exist" containerID="4ab3f710f54dd71bb0e15145e6ece1212d7fb367a006054023af06b1855d096e" Jan 27 16:28:18 crc kubenswrapper[4767]: I0127 16:28:18.401245 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ab3f710f54dd71bb0e15145e6ece1212d7fb367a006054023af06b1855d096e"} err="failed to get container status \"4ab3f710f54dd71bb0e15145e6ece1212d7fb367a006054023af06b1855d096e\": rpc error: code = NotFound desc = could not find container \"4ab3f710f54dd71bb0e15145e6ece1212d7fb367a006054023af06b1855d096e\": container with ID starting with 4ab3f710f54dd71bb0e15145e6ece1212d7fb367a006054023af06b1855d096e not found: ID does not exist" Jan 27 16:28:20 crc kubenswrapper[4767]: I0127 16:28:20.345929 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccb4899b-561b-4798-8c1e-932afdf3b1fd" path="/var/lib/kubelet/pods/ccb4899b-561b-4798-8c1e-932afdf3b1fd/volumes" Jan 27 16:28:23 crc kubenswrapper[4767]: I0127 16:28:23.138688 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rx2lz" Jan 27 16:28:23 crc kubenswrapper[4767]: I0127 16:28:23.139608 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rx2lz" Jan 27 16:28:23 crc kubenswrapper[4767]: I0127 16:28:23.181028 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rx2lz" Jan 27 16:28:23 crc kubenswrapper[4767]: I0127 16:28:23.415264 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rx2lz" Jan 27 16:28:23 crc kubenswrapper[4767]: I0127 16:28:23.459827 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rx2lz"] Jan 27 16:28:25 crc kubenswrapper[4767]: I0127 16:28:25.369417 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rx2lz" podUID="e160a35b-4470-48d1-82db-0cd039bfaf9a" containerName="registry-server" containerID="cri-o://3a3edf405171e84988e875d479a9f9d43ba43178043464e6c00eedfdf5d17ba5" gracePeriod=2 Jan 27 16:28:25 crc kubenswrapper[4767]: I0127 16:28:25.714001 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rx2lz" Jan 27 16:28:25 crc kubenswrapper[4767]: I0127 16:28:25.877821 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbmds\" (UniqueName: \"kubernetes.io/projected/e160a35b-4470-48d1-82db-0cd039bfaf9a-kube-api-access-cbmds\") pod \"e160a35b-4470-48d1-82db-0cd039bfaf9a\" (UID: \"e160a35b-4470-48d1-82db-0cd039bfaf9a\") " Jan 27 16:28:25 crc kubenswrapper[4767]: I0127 16:28:25.877881 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e160a35b-4470-48d1-82db-0cd039bfaf9a-utilities\") pod \"e160a35b-4470-48d1-82db-0cd039bfaf9a\" (UID: \"e160a35b-4470-48d1-82db-0cd039bfaf9a\") " Jan 27 16:28:25 crc kubenswrapper[4767]: I0127 16:28:25.877925 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e160a35b-4470-48d1-82db-0cd039bfaf9a-catalog-content\") pod \"e160a35b-4470-48d1-82db-0cd039bfaf9a\" (UID: \"e160a35b-4470-48d1-82db-0cd039bfaf9a\") " Jan 27 16:28:25 crc kubenswrapper[4767]: I0127 16:28:25.879683 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e160a35b-4470-48d1-82db-0cd039bfaf9a-utilities" (OuterVolumeSpecName: "utilities") pod "e160a35b-4470-48d1-82db-0cd039bfaf9a" (UID: "e160a35b-4470-48d1-82db-0cd039bfaf9a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:28:25 crc kubenswrapper[4767]: I0127 16:28:25.883429 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e160a35b-4470-48d1-82db-0cd039bfaf9a-kube-api-access-cbmds" (OuterVolumeSpecName: "kube-api-access-cbmds") pod "e160a35b-4470-48d1-82db-0cd039bfaf9a" (UID: "e160a35b-4470-48d1-82db-0cd039bfaf9a"). InnerVolumeSpecName "kube-api-access-cbmds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:28:25 crc kubenswrapper[4767]: I0127 16:28:25.980145 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbmds\" (UniqueName: \"kubernetes.io/projected/e160a35b-4470-48d1-82db-0cd039bfaf9a-kube-api-access-cbmds\") on node \"crc\" DevicePath \"\"" Jan 27 16:28:25 crc kubenswrapper[4767]: I0127 16:28:25.980189 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e160a35b-4470-48d1-82db-0cd039bfaf9a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:28:26 crc kubenswrapper[4767]: I0127 16:28:26.082712 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e160a35b-4470-48d1-82db-0cd039bfaf9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e160a35b-4470-48d1-82db-0cd039bfaf9a" (UID: "e160a35b-4470-48d1-82db-0cd039bfaf9a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:28:26 crc kubenswrapper[4767]: I0127 16:28:26.182070 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e160a35b-4470-48d1-82db-0cd039bfaf9a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:28:26 crc kubenswrapper[4767]: I0127 16:28:26.377188 4767 generic.go:334] "Generic (PLEG): container finished" podID="e160a35b-4470-48d1-82db-0cd039bfaf9a" containerID="3a3edf405171e84988e875d479a9f9d43ba43178043464e6c00eedfdf5d17ba5" exitCode=0 Jan 27 16:28:26 crc kubenswrapper[4767]: I0127 16:28:26.377251 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rx2lz" Jan 27 16:28:26 crc kubenswrapper[4767]: I0127 16:28:26.377256 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rx2lz" event={"ID":"e160a35b-4470-48d1-82db-0cd039bfaf9a","Type":"ContainerDied","Data":"3a3edf405171e84988e875d479a9f9d43ba43178043464e6c00eedfdf5d17ba5"} Jan 27 16:28:26 crc kubenswrapper[4767]: I0127 16:28:26.377280 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rx2lz" event={"ID":"e160a35b-4470-48d1-82db-0cd039bfaf9a","Type":"ContainerDied","Data":"0cbef0b85eb4c8e3af551bed2d2cac27a54863912bb94b1869a72c45d422be04"} Jan 27 16:28:26 crc kubenswrapper[4767]: I0127 16:28:26.377295 4767 scope.go:117] "RemoveContainer" containerID="3a3edf405171e84988e875d479a9f9d43ba43178043464e6c00eedfdf5d17ba5" Jan 27 16:28:26 crc kubenswrapper[4767]: I0127 16:28:26.396837 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rx2lz"] Jan 27 16:28:26 crc kubenswrapper[4767]: I0127 16:28:26.402767 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rx2lz"] Jan 27 16:28:26 crc kubenswrapper[4767]: I0127 16:28:26.404610 4767 scope.go:117] "RemoveContainer" containerID="6976c9854230caf08a4680f07d39d237b413f43f5559e2e5fcef8aa1c1c50c9c" Jan 27 16:28:26 crc kubenswrapper[4767]: I0127 16:28:26.423820 4767 scope.go:117] "RemoveContainer" containerID="90c612a8c5cf5e89ab642427d3434bae83b95c73c42e7d365975307b8fa3cec8" Jan 27 16:28:26 crc kubenswrapper[4767]: I0127 16:28:26.444668 4767 scope.go:117] "RemoveContainer" containerID="3a3edf405171e84988e875d479a9f9d43ba43178043464e6c00eedfdf5d17ba5" Jan 27 16:28:26 crc kubenswrapper[4767]: E0127 16:28:26.445187 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a3edf405171e84988e875d479a9f9d43ba43178043464e6c00eedfdf5d17ba5\": container with ID starting with 3a3edf405171e84988e875d479a9f9d43ba43178043464e6c00eedfdf5d17ba5 not found: ID does not exist" containerID="3a3edf405171e84988e875d479a9f9d43ba43178043464e6c00eedfdf5d17ba5" Jan 27 16:28:26 crc kubenswrapper[4767]: I0127 16:28:26.445238 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a3edf405171e84988e875d479a9f9d43ba43178043464e6c00eedfdf5d17ba5"} err="failed to get container status \"3a3edf405171e84988e875d479a9f9d43ba43178043464e6c00eedfdf5d17ba5\": rpc error: code = NotFound desc = could not find container \"3a3edf405171e84988e875d479a9f9d43ba43178043464e6c00eedfdf5d17ba5\": container with ID starting with 3a3edf405171e84988e875d479a9f9d43ba43178043464e6c00eedfdf5d17ba5 not found: ID does not exist" Jan 27 
Jan 27 16:28:26 crc kubenswrapper[4767]: I0127 16:28:26.445268 4767 scope.go:117] "RemoveContainer" containerID="6976c9854230caf08a4680f07d39d237b413f43f5559e2e5fcef8aa1c1c50c9c"
Jan 27 16:28:26 crc kubenswrapper[4767]: E0127 16:28:26.445638 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6976c9854230caf08a4680f07d39d237b413f43f5559e2e5fcef8aa1c1c50c9c\": container with ID starting with 6976c9854230caf08a4680f07d39d237b413f43f5559e2e5fcef8aa1c1c50c9c not found: ID does not exist" containerID="6976c9854230caf08a4680f07d39d237b413f43f5559e2e5fcef8aa1c1c50c9c"
Jan 27 16:28:26 crc kubenswrapper[4767]: I0127 16:28:26.445663 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6976c9854230caf08a4680f07d39d237b413f43f5559e2e5fcef8aa1c1c50c9c"} err="failed to get container status \"6976c9854230caf08a4680f07d39d237b413f43f5559e2e5fcef8aa1c1c50c9c\": rpc error: code = NotFound desc = could not find container \"6976c9854230caf08a4680f07d39d237b413f43f5559e2e5fcef8aa1c1c50c9c\": container with ID starting with 6976c9854230caf08a4680f07d39d237b413f43f5559e2e5fcef8aa1c1c50c9c not found: ID does not exist"
Jan 27 16:28:26 crc kubenswrapper[4767]: I0127 16:28:26.445699 4767 scope.go:117] "RemoveContainer" containerID="90c612a8c5cf5e89ab642427d3434bae83b95c73c42e7d365975307b8fa3cec8"
Jan 27 16:28:26 crc kubenswrapper[4767]: E0127 16:28:26.445971 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90c612a8c5cf5e89ab642427d3434bae83b95c73c42e7d365975307b8fa3cec8\": container with ID starting with 90c612a8c5cf5e89ab642427d3434bae83b95c73c42e7d365975307b8fa3cec8 not found: ID does not exist" containerID="90c612a8c5cf5e89ab642427d3434bae83b95c73c42e7d365975307b8fa3cec8"
Jan 27 16:28:26 crc kubenswrapper[4767]: I0127 16:28:26.445999 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90c612a8c5cf5e89ab642427d3434bae83b95c73c42e7d365975307b8fa3cec8"} err="failed to get container status \"90c612a8c5cf5e89ab642427d3434bae83b95c73c42e7d365975307b8fa3cec8\": rpc error: code = NotFound desc = could not find container \"90c612a8c5cf5e89ab642427d3434bae83b95c73c42e7d365975307b8fa3cec8\": container with ID starting with 90c612a8c5cf5e89ab642427d3434bae83b95c73c42e7d365975307b8fa3cec8 not found: ID does not exist"
Jan 27 16:28:28 crc kubenswrapper[4767]: I0127 16:28:28.336194 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e160a35b-4470-48d1-82db-0cd039bfaf9a" path="/var/lib/kubelet/pods/e160a35b-4470-48d1-82db-0cd039bfaf9a/volumes"
Jan 27 16:28:54 crc kubenswrapper[4767]: I0127 16:28:54.857901 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 16:28:54 crc kubenswrapper[4767]: I0127 16:28:54.859492 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
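The liveness failures above are plain HTTP GETs against the container's health endpoint, refused at the TCP level because nothing is listening on 127.0.0.1:8798. A sketch of an equivalent check; the URL matches the log, while the timeout value is an assumption:

```go
// Sketch of the check the prober performs: GET the health endpoint and
// treat any transport error (here "connect: connection refused") as failure.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 1 * time.Second} // timeout is assumed
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// With no listener this is the same output recorded by patch_prober.
		fmt.Println("Probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("Probe status:", resp.Status)
}
```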
Jan 27 16:29:24 crc kubenswrapper[4767]: I0127 16:29:24.857422 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 16:29:24 crc kubenswrapper[4767]: I0127 16:29:24.857947 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 16:29:54 crc kubenswrapper[4767]: I0127 16:29:54.857276 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 16:29:54 crc kubenswrapper[4767]: I0127 16:29:54.857861 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 16:29:54 crc kubenswrapper[4767]: I0127 16:29:54.857919 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx"
Jan 27 16:29:54 crc kubenswrapper[4767]: I0127 16:29:54.858644 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc"} pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 16:29:54 crc kubenswrapper[4767]: I0127 16:29:54.858712 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" containerID="cri-o://e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc" gracePeriod=600
Jan 27 16:29:54 crc kubenswrapper[4767]: E0127 16:29:54.991284 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:29:55 crc kubenswrapper[4767]: I0127 16:29:55.175682 4767 generic.go:334] "Generic (PLEG): container finished" podID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerID="e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc" exitCode=0
Jan 27 16:29:55 crc kubenswrapper[4767]: I0127 16:29:55.175766 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerDied","Data":"e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc"}
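After three consecutive liveness failures the kubelet kills the container (gracePeriod=600) and the restart falls under CrashLoopBackOff. The "back-off 5m0s" in the error is the saturated cap of a delay that roughly doubles per failed restart. A rough model follows; the 10s base is kubelet's usual default but is an assumption here, not something this log states:

```go
// Rough model of the crash-loop delay: double per failed restart,
// saturating at the 5m0s cap seen in the messages above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const maxDelay = 5 * time.Minute
	delay := 10 * time.Second // assumed base
	for i := 1; i <= 7; i++ {
		fmt.Printf("restart %d: wait %v\n", i, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // "back-off 5m0s restarting failed container=..."
		}
	}
}
```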
Jan 27 16:29:55 crc kubenswrapper[4767]: I0127 16:29:55.176108 4767 scope.go:117] "RemoveContainer" containerID="97d55ff553cc40bc63f7f2fd524b907d3f8b637a3ee5c3f1633b004859f0c818"
Jan 27 16:29:55 crc kubenswrapper[4767]: I0127 16:29:55.177155 4767 scope.go:117] "RemoveContainer" containerID="e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc"
Jan 27 16:29:55 crc kubenswrapper[4767]: E0127 16:29:55.177467 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:30:00 crc kubenswrapper[4767]: I0127 16:30:00.141449 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492190-zdlnr"]
Jan 27 16:30:00 crc kubenswrapper[4767]: E0127 16:30:00.142127 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e160a35b-4470-48d1-82db-0cd039bfaf9a" containerName="registry-server"
Jan 27 16:30:00 crc kubenswrapper[4767]: I0127 16:30:00.142147 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="e160a35b-4470-48d1-82db-0cd039bfaf9a" containerName="registry-server"
Jan 27 16:30:00 crc kubenswrapper[4767]: E0127 16:30:00.142167 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccb4899b-561b-4798-8c1e-932afdf3b1fd" containerName="extract-utilities"
Jan 27 16:30:00 crc kubenswrapper[4767]: I0127 16:30:00.142175 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccb4899b-561b-4798-8c1e-932afdf3b1fd" containerName="extract-utilities"
Jan 27 16:30:00 crc kubenswrapper[4767]: E0127 16:30:00.142190 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccb4899b-561b-4798-8c1e-932afdf3b1fd" containerName="registry-server"
Jan 27 16:30:00 crc kubenswrapper[4767]: I0127 16:30:00.142246 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccb4899b-561b-4798-8c1e-932afdf3b1fd" containerName="registry-server"
Jan 27 16:30:00 crc kubenswrapper[4767]: E0127 16:30:00.142259 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e160a35b-4470-48d1-82db-0cd039bfaf9a" containerName="extract-content"
Jan 27 16:30:00 crc kubenswrapper[4767]: I0127 16:30:00.142265 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="e160a35b-4470-48d1-82db-0cd039bfaf9a" containerName="extract-content"
Jan 27 16:30:00 crc kubenswrapper[4767]: E0127 16:30:00.142287 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e160a35b-4470-48d1-82db-0cd039bfaf9a" containerName="extract-utilities"
Jan 27 16:30:00 crc kubenswrapper[4767]: I0127 16:30:00.142294 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="e160a35b-4470-48d1-82db-0cd039bfaf9a" containerName="extract-utilities"
Jan 27 16:30:00 crc kubenswrapper[4767]: E0127 16:30:00.142310 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccb4899b-561b-4798-8c1e-932afdf3b1fd" containerName="extract-content"
Jan 27 16:30:00 crc kubenswrapper[4767]: I0127 16:30:00.142317 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccb4899b-561b-4798-8c1e-932afdf3b1fd" containerName="extract-content"
Jan 27 16:30:00 crc kubenswrapper[4767]: I0127 16:30:00.142476 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="e160a35b-4470-48d1-82db-0cd039bfaf9a" containerName="registry-server"
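On the SyncLoop ADD, the CPU and memory managers sweep out per-container state left behind by pods that no longer exist (the marketplace pods removed earlier). The gist is a map keyed by (podUID, container) pruned against the set of live pods; the sketch below lifts its data from the entries above, while the pruning logic itself is only illustrative:

```go
// Sketch of the RemoveStaleState idea: drop resource-manager state for
// any container whose pod the kubelet no longer tracks.
package main

import "fmt"

type key struct{ podUID, container string }

func main() {
	state := map[key]string{
		{"e160a35b-4470-48d1-82db-0cd039bfaf9a", "registry-server"}: "cpuset",
		{"ccb4899b-561b-4798-8c1e-932afdf3b1fd", "extract-content"}: "cpuset",
	}
	live := map[string]bool{"d0b64181-3546-4e58-9aeb-2b832dd80a1c": true}
	for k := range state {
		if !live[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container %s/%s\n", k.podUID, k.container)
			delete(state, k)
		}
	}
}
```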
podUID="e160a35b-4470-48d1-82db-0cd039bfaf9a" containerName="registry-server" Jan 27 16:30:00 crc kubenswrapper[4767]: I0127 16:30:00.142493 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccb4899b-561b-4798-8c1e-932afdf3b1fd" containerName="registry-server" Jan 27 16:30:00 crc kubenswrapper[4767]: I0127 16:30:00.143062 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-zdlnr" Jan 27 16:30:00 crc kubenswrapper[4767]: I0127 16:30:00.150899 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 16:30:00 crc kubenswrapper[4767]: I0127 16:30:00.155866 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 16:30:00 crc kubenswrapper[4767]: I0127 16:30:00.158170 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492190-zdlnr"] Jan 27 16:30:00 crc kubenswrapper[4767]: I0127 16:30:00.224858 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0b64181-3546-4e58-9aeb-2b832dd80a1c-config-volume\") pod \"collect-profiles-29492190-zdlnr\" (UID: \"d0b64181-3546-4e58-9aeb-2b832dd80a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-zdlnr" Jan 27 16:30:00 crc kubenswrapper[4767]: I0127 16:30:00.224960 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0b64181-3546-4e58-9aeb-2b832dd80a1c-secret-volume\") pod \"collect-profiles-29492190-zdlnr\" (UID: \"d0b64181-3546-4e58-9aeb-2b832dd80a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-zdlnr" Jan 27 16:30:00 crc kubenswrapper[4767]: I0127 16:30:00.225173 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwtkw\" (UniqueName: \"kubernetes.io/projected/d0b64181-3546-4e58-9aeb-2b832dd80a1c-kube-api-access-vwtkw\") pod \"collect-profiles-29492190-zdlnr\" (UID: \"d0b64181-3546-4e58-9aeb-2b832dd80a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-zdlnr" Jan 27 16:30:00 crc kubenswrapper[4767]: I0127 16:30:00.326455 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0b64181-3546-4e58-9aeb-2b832dd80a1c-config-volume\") pod \"collect-profiles-29492190-zdlnr\" (UID: \"d0b64181-3546-4e58-9aeb-2b832dd80a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-zdlnr" Jan 27 16:30:00 crc kubenswrapper[4767]: I0127 16:30:00.326512 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0b64181-3546-4e58-9aeb-2b832dd80a1c-secret-volume\") pod \"collect-profiles-29492190-zdlnr\" (UID: \"d0b64181-3546-4e58-9aeb-2b832dd80a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-zdlnr" Jan 27 16:30:00 crc kubenswrapper[4767]: I0127 16:30:00.326615 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwtkw\" (UniqueName: \"kubernetes.io/projected/d0b64181-3546-4e58-9aeb-2b832dd80a1c-kube-api-access-vwtkw\") pod 
\"collect-profiles-29492190-zdlnr\" (UID: \"d0b64181-3546-4e58-9aeb-2b832dd80a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-zdlnr" Jan 27 16:30:00 crc kubenswrapper[4767]: I0127 16:30:00.327397 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0b64181-3546-4e58-9aeb-2b832dd80a1c-config-volume\") pod \"collect-profiles-29492190-zdlnr\" (UID: \"d0b64181-3546-4e58-9aeb-2b832dd80a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-zdlnr" Jan 27 16:30:00 crc kubenswrapper[4767]: I0127 16:30:00.338195 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0b64181-3546-4e58-9aeb-2b832dd80a1c-secret-volume\") pod \"collect-profiles-29492190-zdlnr\" (UID: \"d0b64181-3546-4e58-9aeb-2b832dd80a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-zdlnr" Jan 27 16:30:00 crc kubenswrapper[4767]: I0127 16:30:00.346159 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwtkw\" (UniqueName: \"kubernetes.io/projected/d0b64181-3546-4e58-9aeb-2b832dd80a1c-kube-api-access-vwtkw\") pod \"collect-profiles-29492190-zdlnr\" (UID: \"d0b64181-3546-4e58-9aeb-2b832dd80a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-zdlnr" Jan 27 16:30:00 crc kubenswrapper[4767]: I0127 16:30:00.495219 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-zdlnr" Jan 27 16:30:00 crc kubenswrapper[4767]: I0127 16:30:00.899271 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492190-zdlnr"] Jan 27 16:30:01 crc kubenswrapper[4767]: I0127 16:30:01.225134 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-zdlnr" event={"ID":"d0b64181-3546-4e58-9aeb-2b832dd80a1c","Type":"ContainerStarted","Data":"70e915370cab6f95e88757e1ff001304e2070d2b69aa437c3373fdf371fa85a5"} Jan 27 16:30:01 crc kubenswrapper[4767]: I0127 16:30:01.225547 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-zdlnr" event={"ID":"d0b64181-3546-4e58-9aeb-2b832dd80a1c","Type":"ContainerStarted","Data":"8d0eaeb088aa2cc1741d971961bac71b55c324c9f8c944d7775c15a9b67ff388"} Jan 27 16:30:01 crc kubenswrapper[4767]: I0127 16:30:01.242164 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-zdlnr" podStartSLOduration=1.242147436 podStartE2EDuration="1.242147436s" podCreationTimestamp="2026-01-27 16:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:30:01.238442792 +0000 UTC m=+2423.627460315" watchObservedRunningTime="2026-01-27 16:30:01.242147436 +0000 UTC m=+2423.631164949" Jan 27 16:30:02 crc kubenswrapper[4767]: I0127 16:30:02.238497 4767 generic.go:334] "Generic (PLEG): container finished" podID="d0b64181-3546-4e58-9aeb-2b832dd80a1c" containerID="70e915370cab6f95e88757e1ff001304e2070d2b69aa437c3373fdf371fa85a5" exitCode=0 Jan 27 16:30:02 crc kubenswrapper[4767]: I0127 16:30:02.238545 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-zdlnr" event={"ID":"d0b64181-3546-4e58-9aeb-2b832dd80a1c","Type":"ContainerDied","Data":"70e915370cab6f95e88757e1ff001304e2070d2b69aa437c3373fdf371fa85a5"} Jan 27 16:30:03 crc kubenswrapper[4767]: I0127 16:30:03.500698 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-zdlnr" Jan 27 16:30:03 crc kubenswrapper[4767]: I0127 16:30:03.570439 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0b64181-3546-4e58-9aeb-2b832dd80a1c-config-volume\") pod \"d0b64181-3546-4e58-9aeb-2b832dd80a1c\" (UID: \"d0b64181-3546-4e58-9aeb-2b832dd80a1c\") " Jan 27 16:30:03 crc kubenswrapper[4767]: I0127 16:30:03.570512 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwtkw\" (UniqueName: \"kubernetes.io/projected/d0b64181-3546-4e58-9aeb-2b832dd80a1c-kube-api-access-vwtkw\") pod \"d0b64181-3546-4e58-9aeb-2b832dd80a1c\" (UID: \"d0b64181-3546-4e58-9aeb-2b832dd80a1c\") " Jan 27 16:30:03 crc kubenswrapper[4767]: I0127 16:30:03.570620 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0b64181-3546-4e58-9aeb-2b832dd80a1c-secret-volume\") pod \"d0b64181-3546-4e58-9aeb-2b832dd80a1c\" (UID: \"d0b64181-3546-4e58-9aeb-2b832dd80a1c\") " Jan 27 16:30:03 crc kubenswrapper[4767]: I0127 16:30:03.571957 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0b64181-3546-4e58-9aeb-2b832dd80a1c-config-volume" (OuterVolumeSpecName: "config-volume") pod "d0b64181-3546-4e58-9aeb-2b832dd80a1c" (UID: "d0b64181-3546-4e58-9aeb-2b832dd80a1c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:30:03 crc kubenswrapper[4767]: I0127 16:30:03.577213 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0b64181-3546-4e58-9aeb-2b832dd80a1c-kube-api-access-vwtkw" (OuterVolumeSpecName: "kube-api-access-vwtkw") pod "d0b64181-3546-4e58-9aeb-2b832dd80a1c" (UID: "d0b64181-3546-4e58-9aeb-2b832dd80a1c"). InnerVolumeSpecName "kube-api-access-vwtkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:30:03 crc kubenswrapper[4767]: I0127 16:30:03.577357 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0b64181-3546-4e58-9aeb-2b832dd80a1c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d0b64181-3546-4e58-9aeb-2b832dd80a1c" (UID: "d0b64181-3546-4e58-9aeb-2b832dd80a1c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:30:03 crc kubenswrapper[4767]: I0127 16:30:03.671863 4767 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0b64181-3546-4e58-9aeb-2b832dd80a1c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 16:30:03 crc kubenswrapper[4767]: I0127 16:30:03.671903 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwtkw\" (UniqueName: \"kubernetes.io/projected/d0b64181-3546-4e58-9aeb-2b832dd80a1c-kube-api-access-vwtkw\") on node \"crc\" DevicePath \"\"" Jan 27 16:30:03 crc kubenswrapper[4767]: I0127 16:30:03.671920 4767 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d0b64181-3546-4e58-9aeb-2b832dd80a1c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 16:30:04 crc kubenswrapper[4767]: I0127 16:30:04.255227 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-zdlnr" event={"ID":"d0b64181-3546-4e58-9aeb-2b832dd80a1c","Type":"ContainerDied","Data":"8d0eaeb088aa2cc1741d971961bac71b55c324c9f8c944d7775c15a9b67ff388"} Jan 27 16:30:04 crc kubenswrapper[4767]: I0127 16:30:04.255528 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d0eaeb088aa2cc1741d971961bac71b55c324c9f8c944d7775c15a9b67ff388" Jan 27 16:30:04 crc kubenswrapper[4767]: I0127 16:30:04.255642 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492190-zdlnr" Jan 27 16:30:04 crc kubenswrapper[4767]: I0127 16:30:04.320085 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492145-4vjsw"] Jan 27 16:30:04 crc kubenswrapper[4767]: I0127 16:30:04.336995 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492145-4vjsw"] Jan 27 16:30:06 crc kubenswrapper[4767]: I0127 16:30:06.336008 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df3e72cd-0745-4a8e-b3b5-25d23bccaa1c" path="/var/lib/kubelet/pods/df3e72cd-0745-4a8e-b3b5-25d23bccaa1c/volumes" Jan 27 16:30:08 crc kubenswrapper[4767]: I0127 16:30:08.330230 4767 scope.go:117] "RemoveContainer" containerID="e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc" Jan 27 16:30:08 crc kubenswrapper[4767]: E0127 16:30:08.330870 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:30:20 crc kubenswrapper[4767]: I0127 16:30:20.325312 4767 scope.go:117] "RemoveContainer" containerID="e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc" Jan 27 16:30:20 crc kubenswrapper[4767]: E0127 16:30:20.326168 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:30:35 crc kubenswrapper[4767]: I0127 16:30:35.325959 4767 scope.go:117] "RemoveContainer" containerID="e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc" Jan 27 16:30:35 crc kubenswrapper[4767]: E0127 16:30:35.327022 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:30:42 crc kubenswrapper[4767]: I0127 16:30:42.449990 4767 scope.go:117] "RemoveContainer" containerID="7a5d02e78be533a25699f9ad1f67bf9596656a6db315be805ceacceb5b1f5507" Jan 27 16:30:50 crc kubenswrapper[4767]: I0127 16:30:50.326013 4767 scope.go:117] "RemoveContainer" containerID="e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc" Jan 27 16:30:50 crc kubenswrapper[4767]: E0127 16:30:50.326751 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:31:04 crc kubenswrapper[4767]: I0127 16:31:04.325341 4767 scope.go:117] "RemoveContainer" containerID="e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc" Jan 27 16:31:04 crc kubenswrapper[4767]: E0127 16:31:04.326001 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:31:18 crc kubenswrapper[4767]: I0127 16:31:18.329499 4767 scope.go:117] "RemoveContainer" containerID="e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc" Jan 27 16:31:18 crc kubenswrapper[4767]: E0127 16:31:18.330327 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:31:33 crc kubenswrapper[4767]: I0127 16:31:33.328763 4767 scope.go:117] "RemoveContainer" containerID="e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc" Jan 27 16:31:33 crc kubenswrapper[4767]: E0127 16:31:33.329721 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:31:39 crc kubenswrapper[4767]: I0127 16:31:39.002518 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bbvmn"] Jan 27 16:31:39 crc kubenswrapper[4767]: E0127 16:31:39.003378 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b64181-3546-4e58-9aeb-2b832dd80a1c" containerName="collect-profiles" Jan 27 16:31:39 crc kubenswrapper[4767]: I0127 16:31:39.003393 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b64181-3546-4e58-9aeb-2b832dd80a1c" containerName="collect-profiles" Jan 27 16:31:39 crc kubenswrapper[4767]: I0127 16:31:39.003531 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0b64181-3546-4e58-9aeb-2b832dd80a1c" containerName="collect-profiles" Jan 27 16:31:39 crc kubenswrapper[4767]: I0127 16:31:39.004552 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bbvmn" Jan 27 16:31:39 crc kubenswrapper[4767]: I0127 16:31:39.034786 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bbvmn"] Jan 27 16:31:39 crc kubenswrapper[4767]: I0127 16:31:39.108510 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n9wt\" (UniqueName: \"kubernetes.io/projected/5a6d5791-17c0-432e-a414-d0291ab1cf56-kube-api-access-4n9wt\") pod \"redhat-marketplace-bbvmn\" (UID: \"5a6d5791-17c0-432e-a414-d0291ab1cf56\") " pod="openshift-marketplace/redhat-marketplace-bbvmn" Jan 27 16:31:39 crc kubenswrapper[4767]: I0127 16:31:39.108608 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a6d5791-17c0-432e-a414-d0291ab1cf56-catalog-content\") pod \"redhat-marketplace-bbvmn\" (UID: \"5a6d5791-17c0-432e-a414-d0291ab1cf56\") " pod="openshift-marketplace/redhat-marketplace-bbvmn" Jan 27 16:31:39 crc kubenswrapper[4767]: I0127 16:31:39.108649 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a6d5791-17c0-432e-a414-d0291ab1cf56-utilities\") pod \"redhat-marketplace-bbvmn\" (UID: \"5a6d5791-17c0-432e-a414-d0291ab1cf56\") " pod="openshift-marketplace/redhat-marketplace-bbvmn" Jan 27 16:31:39 crc kubenswrapper[4767]: I0127 16:31:39.209378 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n9wt\" (UniqueName: \"kubernetes.io/projected/5a6d5791-17c0-432e-a414-d0291ab1cf56-kube-api-access-4n9wt\") pod \"redhat-marketplace-bbvmn\" (UID: \"5a6d5791-17c0-432e-a414-d0291ab1cf56\") " pod="openshift-marketplace/redhat-marketplace-bbvmn" Jan 27 16:31:39 crc kubenswrapper[4767]: I0127 16:31:39.209491 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a6d5791-17c0-432e-a414-d0291ab1cf56-catalog-content\") pod \"redhat-marketplace-bbvmn\" (UID: \"5a6d5791-17c0-432e-a414-d0291ab1cf56\") " pod="openshift-marketplace/redhat-marketplace-bbvmn" Jan 27 16:31:39 crc kubenswrapper[4767]: I0127 16:31:39.209985 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a6d5791-17c0-432e-a414-d0291ab1cf56-catalog-content\") pod 
\"redhat-marketplace-bbvmn\" (UID: \"5a6d5791-17c0-432e-a414-d0291ab1cf56\") " pod="openshift-marketplace/redhat-marketplace-bbvmn" Jan 27 16:31:39 crc kubenswrapper[4767]: I0127 16:31:39.210038 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a6d5791-17c0-432e-a414-d0291ab1cf56-utilities\") pod \"redhat-marketplace-bbvmn\" (UID: \"5a6d5791-17c0-432e-a414-d0291ab1cf56\") " pod="openshift-marketplace/redhat-marketplace-bbvmn" Jan 27 16:31:39 crc kubenswrapper[4767]: I0127 16:31:39.210036 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a6d5791-17c0-432e-a414-d0291ab1cf56-utilities\") pod \"redhat-marketplace-bbvmn\" (UID: \"5a6d5791-17c0-432e-a414-d0291ab1cf56\") " pod="openshift-marketplace/redhat-marketplace-bbvmn" Jan 27 16:31:39 crc kubenswrapper[4767]: I0127 16:31:39.236683 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n9wt\" (UniqueName: \"kubernetes.io/projected/5a6d5791-17c0-432e-a414-d0291ab1cf56-kube-api-access-4n9wt\") pod \"redhat-marketplace-bbvmn\" (UID: \"5a6d5791-17c0-432e-a414-d0291ab1cf56\") " pod="openshift-marketplace/redhat-marketplace-bbvmn" Jan 27 16:31:39 crc kubenswrapper[4767]: I0127 16:31:39.327601 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bbvmn" Jan 27 16:31:39 crc kubenswrapper[4767]: I0127 16:31:39.775436 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bbvmn"] Jan 27 16:31:39 crc kubenswrapper[4767]: I0127 16:31:39.923389 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bbvmn" event={"ID":"5a6d5791-17c0-432e-a414-d0291ab1cf56","Type":"ContainerStarted","Data":"a8aa52541ca4f5963308dedb0e1b5873dd7b24b9e04ca8139fd796cb891502d3"} Jan 27 16:31:40 crc kubenswrapper[4767]: I0127 16:31:40.934149 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bbvmn" event={"ID":"5a6d5791-17c0-432e-a414-d0291ab1cf56","Type":"ContainerDied","Data":"224299bedc107070794d0bfba20c2e2ef83f589b856152436effc2cfdfb14011"} Jan 27 16:31:40 crc kubenswrapper[4767]: I0127 16:31:40.934360 4767 generic.go:334] "Generic (PLEG): container finished" podID="5a6d5791-17c0-432e-a414-d0291ab1cf56" containerID="224299bedc107070794d0bfba20c2e2ef83f589b856152436effc2cfdfb14011" exitCode=0 Jan 27 16:31:40 crc kubenswrapper[4767]: I0127 16:31:40.936321 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 16:31:42 crc kubenswrapper[4767]: I0127 16:31:42.947545 4767 generic.go:334] "Generic (PLEG): container finished" podID="5a6d5791-17c0-432e-a414-d0291ab1cf56" containerID="cd49ef156603627d824661f5e7e6295835a225d6d7a2cd09ad66ad63a532da0d" exitCode=0 Jan 27 16:31:42 crc kubenswrapper[4767]: I0127 16:31:42.947622 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bbvmn" event={"ID":"5a6d5791-17c0-432e-a414-d0291ab1cf56","Type":"ContainerDied","Data":"cd49ef156603627d824661f5e7e6295835a225d6d7a2cd09ad66ad63a532da0d"} Jan 27 16:31:43 crc kubenswrapper[4767]: I0127 16:31:43.958238 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bbvmn" 
event={"ID":"5a6d5791-17c0-432e-a414-d0291ab1cf56","Type":"ContainerStarted","Data":"cddb68bafeb2a873316a600a6e18086869afb36eb4517e45651f1d65cd0a7f05"} Jan 27 16:31:43 crc kubenswrapper[4767]: I0127 16:31:43.977948 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bbvmn" podStartSLOduration=3.518815326 podStartE2EDuration="5.977931085s" podCreationTimestamp="2026-01-27 16:31:38 +0000 UTC" firstStartedPulling="2026-01-27 16:31:40.93591539 +0000 UTC m=+2523.324932923" lastFinishedPulling="2026-01-27 16:31:43.395031159 +0000 UTC m=+2525.784048682" observedRunningTime="2026-01-27 16:31:43.974554631 +0000 UTC m=+2526.363572154" watchObservedRunningTime="2026-01-27 16:31:43.977931085 +0000 UTC m=+2526.366948608" Jan 27 16:31:44 crc kubenswrapper[4767]: I0127 16:31:44.326517 4767 scope.go:117] "RemoveContainer" containerID="e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc" Jan 27 16:31:44 crc kubenswrapper[4767]: E0127 16:31:44.326877 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:31:49 crc kubenswrapper[4767]: I0127 16:31:49.327778 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bbvmn" Jan 27 16:31:49 crc kubenswrapper[4767]: I0127 16:31:49.328161 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bbvmn" Jan 27 16:31:49 crc kubenswrapper[4767]: I0127 16:31:49.378581 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bbvmn" Jan 27 16:31:50 crc kubenswrapper[4767]: I0127 16:31:50.048520 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bbvmn" Jan 27 16:31:50 crc kubenswrapper[4767]: I0127 16:31:50.106502 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bbvmn"] Jan 27 16:31:52 crc kubenswrapper[4767]: I0127 16:31:52.017604 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bbvmn" podUID="5a6d5791-17c0-432e-a414-d0291ab1cf56" containerName="registry-server" containerID="cri-o://cddb68bafeb2a873316a600a6e18086869afb36eb4517e45651f1d65cd0a7f05" gracePeriod=2 Jan 27 16:31:52 crc kubenswrapper[4767]: I0127 16:31:52.468681 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bbvmn" Jan 27 16:31:52 crc kubenswrapper[4767]: I0127 16:31:52.625348 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a6d5791-17c0-432e-a414-d0291ab1cf56-catalog-content\") pod \"5a6d5791-17c0-432e-a414-d0291ab1cf56\" (UID: \"5a6d5791-17c0-432e-a414-d0291ab1cf56\") " Jan 27 16:31:52 crc kubenswrapper[4767]: I0127 16:31:52.625473 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a6d5791-17c0-432e-a414-d0291ab1cf56-utilities\") pod \"5a6d5791-17c0-432e-a414-d0291ab1cf56\" (UID: \"5a6d5791-17c0-432e-a414-d0291ab1cf56\") " Jan 27 16:31:52 crc kubenswrapper[4767]: I0127 16:31:52.625532 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n9wt\" (UniqueName: \"kubernetes.io/projected/5a6d5791-17c0-432e-a414-d0291ab1cf56-kube-api-access-4n9wt\") pod \"5a6d5791-17c0-432e-a414-d0291ab1cf56\" (UID: \"5a6d5791-17c0-432e-a414-d0291ab1cf56\") " Jan 27 16:31:52 crc kubenswrapper[4767]: I0127 16:31:52.626631 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a6d5791-17c0-432e-a414-d0291ab1cf56-utilities" (OuterVolumeSpecName: "utilities") pod "5a6d5791-17c0-432e-a414-d0291ab1cf56" (UID: "5a6d5791-17c0-432e-a414-d0291ab1cf56"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:31:52 crc kubenswrapper[4767]: I0127 16:31:52.637484 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a6d5791-17c0-432e-a414-d0291ab1cf56-kube-api-access-4n9wt" (OuterVolumeSpecName: "kube-api-access-4n9wt") pod "5a6d5791-17c0-432e-a414-d0291ab1cf56" (UID: "5a6d5791-17c0-432e-a414-d0291ab1cf56"). InnerVolumeSpecName "kube-api-access-4n9wt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:31:52 crc kubenswrapper[4767]: I0127 16:31:52.667548 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a6d5791-17c0-432e-a414-d0291ab1cf56-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a6d5791-17c0-432e-a414-d0291ab1cf56" (UID: "5a6d5791-17c0-432e-a414-d0291ab1cf56"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:31:52 crc kubenswrapper[4767]: I0127 16:31:52.727300 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a6d5791-17c0-432e-a414-d0291ab1cf56-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:31:52 crc kubenswrapper[4767]: I0127 16:31:52.727340 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a6d5791-17c0-432e-a414-d0291ab1cf56-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:31:52 crc kubenswrapper[4767]: I0127 16:31:52.727357 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n9wt\" (UniqueName: \"kubernetes.io/projected/5a6d5791-17c0-432e-a414-d0291ab1cf56-kube-api-access-4n9wt\") on node \"crc\" DevicePath \"\"" Jan 27 16:31:53 crc kubenswrapper[4767]: I0127 16:31:53.040086 4767 generic.go:334] "Generic (PLEG): container finished" podID="5a6d5791-17c0-432e-a414-d0291ab1cf56" containerID="cddb68bafeb2a873316a600a6e18086869afb36eb4517e45651f1d65cd0a7f05" exitCode=0 Jan 27 16:31:53 crc kubenswrapper[4767]: I0127 16:31:53.040163 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bbvmn" event={"ID":"5a6d5791-17c0-432e-a414-d0291ab1cf56","Type":"ContainerDied","Data":"cddb68bafeb2a873316a600a6e18086869afb36eb4517e45651f1d65cd0a7f05"} Jan 27 16:31:53 crc kubenswrapper[4767]: I0127 16:31:53.040217 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bbvmn" event={"ID":"5a6d5791-17c0-432e-a414-d0291ab1cf56","Type":"ContainerDied","Data":"a8aa52541ca4f5963308dedb0e1b5873dd7b24b9e04ca8139fd796cb891502d3"} Jan 27 16:31:53 crc kubenswrapper[4767]: I0127 16:31:53.040254 4767 scope.go:117] "RemoveContainer" containerID="cddb68bafeb2a873316a600a6e18086869afb36eb4517e45651f1d65cd0a7f05" Jan 27 16:31:53 crc kubenswrapper[4767]: I0127 16:31:53.040499 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bbvmn" Jan 27 16:31:53 crc kubenswrapper[4767]: I0127 16:31:53.062869 4767 scope.go:117] "RemoveContainer" containerID="cd49ef156603627d824661f5e7e6295835a225d6d7a2cd09ad66ad63a532da0d" Jan 27 16:31:53 crc kubenswrapper[4767]: I0127 16:31:53.095564 4767 scope.go:117] "RemoveContainer" containerID="224299bedc107070794d0bfba20c2e2ef83f589b856152436effc2cfdfb14011" Jan 27 16:31:53 crc kubenswrapper[4767]: I0127 16:31:53.102772 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bbvmn"] Jan 27 16:31:53 crc kubenswrapper[4767]: I0127 16:31:53.108310 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bbvmn"] Jan 27 16:31:53 crc kubenswrapper[4767]: I0127 16:31:53.131770 4767 scope.go:117] "RemoveContainer" containerID="cddb68bafeb2a873316a600a6e18086869afb36eb4517e45651f1d65cd0a7f05" Jan 27 16:31:53 crc kubenswrapper[4767]: E0127 16:31:53.132428 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cddb68bafeb2a873316a600a6e18086869afb36eb4517e45651f1d65cd0a7f05\": container with ID starting with cddb68bafeb2a873316a600a6e18086869afb36eb4517e45651f1d65cd0a7f05 not found: ID does not exist" containerID="cddb68bafeb2a873316a600a6e18086869afb36eb4517e45651f1d65cd0a7f05" Jan 27 16:31:53 crc kubenswrapper[4767]: I0127 16:31:53.132499 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cddb68bafeb2a873316a600a6e18086869afb36eb4517e45651f1d65cd0a7f05"} err="failed to get container status \"cddb68bafeb2a873316a600a6e18086869afb36eb4517e45651f1d65cd0a7f05\": rpc error: code = NotFound desc = could not find container \"cddb68bafeb2a873316a600a6e18086869afb36eb4517e45651f1d65cd0a7f05\": container with ID starting with cddb68bafeb2a873316a600a6e18086869afb36eb4517e45651f1d65cd0a7f05 not found: ID does not exist" Jan 27 16:31:53 crc kubenswrapper[4767]: I0127 16:31:53.132548 4767 scope.go:117] "RemoveContainer" containerID="cd49ef156603627d824661f5e7e6295835a225d6d7a2cd09ad66ad63a532da0d" Jan 27 16:31:53 crc kubenswrapper[4767]: E0127 16:31:53.132949 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd49ef156603627d824661f5e7e6295835a225d6d7a2cd09ad66ad63a532da0d\": container with ID starting with cd49ef156603627d824661f5e7e6295835a225d6d7a2cd09ad66ad63a532da0d not found: ID does not exist" containerID="cd49ef156603627d824661f5e7e6295835a225d6d7a2cd09ad66ad63a532da0d" Jan 27 16:31:53 crc kubenswrapper[4767]: I0127 16:31:53.132991 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd49ef156603627d824661f5e7e6295835a225d6d7a2cd09ad66ad63a532da0d"} err="failed to get container status \"cd49ef156603627d824661f5e7e6295835a225d6d7a2cd09ad66ad63a532da0d\": rpc error: code = NotFound desc = could not find container \"cd49ef156603627d824661f5e7e6295835a225d6d7a2cd09ad66ad63a532da0d\": container with ID starting with cd49ef156603627d824661f5e7e6295835a225d6d7a2cd09ad66ad63a532da0d not found: ID does not exist" Jan 27 16:31:53 crc kubenswrapper[4767]: I0127 16:31:53.133020 4767 scope.go:117] "RemoveContainer" containerID="224299bedc107070794d0bfba20c2e2ef83f589b856152436effc2cfdfb14011" Jan 27 16:31:53 crc kubenswrapper[4767]: E0127 16:31:53.134608 4767 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"224299bedc107070794d0bfba20c2e2ef83f589b856152436effc2cfdfb14011\": container with ID starting with 224299bedc107070794d0bfba20c2e2ef83f589b856152436effc2cfdfb14011 not found: ID does not exist" containerID="224299bedc107070794d0bfba20c2e2ef83f589b856152436effc2cfdfb14011" Jan 27 16:31:53 crc kubenswrapper[4767]: I0127 16:31:53.134646 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"224299bedc107070794d0bfba20c2e2ef83f589b856152436effc2cfdfb14011"} err="failed to get container status \"224299bedc107070794d0bfba20c2e2ef83f589b856152436effc2cfdfb14011\": rpc error: code = NotFound desc = could not find container \"224299bedc107070794d0bfba20c2e2ef83f589b856152436effc2cfdfb14011\": container with ID starting with 224299bedc107070794d0bfba20c2e2ef83f589b856152436effc2cfdfb14011 not found: ID does not exist" Jan 27 16:31:54 crc kubenswrapper[4767]: I0127 16:31:54.332606 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a6d5791-17c0-432e-a414-d0291ab1cf56" path="/var/lib/kubelet/pods/5a6d5791-17c0-432e-a414-d0291ab1cf56/volumes" Jan 27 16:31:57 crc kubenswrapper[4767]: I0127 16:31:57.326654 4767 scope.go:117] "RemoveContainer" containerID="e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc" Jan 27 16:31:57 crc kubenswrapper[4767]: E0127 16:31:57.326905 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:32:10 crc kubenswrapper[4767]: I0127 16:32:10.327613 4767 scope.go:117] "RemoveContainer" containerID="e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc" Jan 27 16:32:10 crc kubenswrapper[4767]: E0127 16:32:10.328561 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:32:25 crc kubenswrapper[4767]: I0127 16:32:25.326361 4767 scope.go:117] "RemoveContainer" containerID="e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc" Jan 27 16:32:25 crc kubenswrapper[4767]: E0127 16:32:25.327603 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:32:36 crc kubenswrapper[4767]: I0127 16:32:36.325969 4767 scope.go:117] "RemoveContainer" containerID="e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc" Jan 27 16:32:36 crc kubenswrapper[4767]: E0127 16:32:36.327550 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:32:49 crc kubenswrapper[4767]: I0127 16:32:49.324992 4767 scope.go:117] "RemoveContainer" containerID="e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc" Jan 27 16:32:49 crc kubenswrapper[4767]: E0127 16:32:49.325710 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:33:01 crc kubenswrapper[4767]: I0127 16:33:01.326220 4767 scope.go:117] "RemoveContainer" containerID="e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc" Jan 27 16:33:01 crc kubenswrapper[4767]: E0127 16:33:01.326998 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:33:16 crc kubenswrapper[4767]: I0127 16:33:16.325694 4767 scope.go:117] "RemoveContainer" containerID="e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc" Jan 27 16:33:16 crc kubenswrapper[4767]: E0127 16:33:16.326518 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:33:30 crc kubenswrapper[4767]: I0127 16:33:30.325354 4767 scope.go:117] "RemoveContainer" containerID="e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc" Jan 27 16:33:30 crc kubenswrapper[4767]: E0127 16:33:30.326104 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:33:43 crc kubenswrapper[4767]: I0127 16:33:43.326541 4767 scope.go:117] "RemoveContainer" containerID="e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc" Jan 27 16:33:43 crc kubenswrapper[4767]: E0127 16:33:43.327222 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 27 16:33:54 crc kubenswrapper[4767]: I0127 16:33:54.326221 4767 scope.go:117] "RemoveContainer" containerID="e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc"
Jan 27 16:33:54 crc kubenswrapper[4767]: E0127 16:33:54.327036 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:34:00 crc kubenswrapper[4767]: I0127 16:34:00.620267 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t2c56"]
Jan 27 16:34:00 crc kubenswrapper[4767]: E0127 16:34:00.621299 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a6d5791-17c0-432e-a414-d0291ab1cf56" containerName="extract-content"
Jan 27 16:34:00 crc kubenswrapper[4767]: I0127 16:34:00.621314 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a6d5791-17c0-432e-a414-d0291ab1cf56" containerName="extract-content"
Jan 27 16:34:00 crc kubenswrapper[4767]: E0127 16:34:00.621335 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a6d5791-17c0-432e-a414-d0291ab1cf56" containerName="registry-server"
Jan 27 16:34:00 crc kubenswrapper[4767]: I0127 16:34:00.621342 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a6d5791-17c0-432e-a414-d0291ab1cf56" containerName="registry-server"
Jan 27 16:34:00 crc kubenswrapper[4767]: E0127 16:34:00.621364 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a6d5791-17c0-432e-a414-d0291ab1cf56" containerName="extract-utilities"
Jan 27 16:34:00 crc kubenswrapper[4767]: I0127 16:34:00.621372 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a6d5791-17c0-432e-a414-d0291ab1cf56" containerName="extract-utilities"
Jan 27 16:34:00 crc kubenswrapper[4767]: I0127 16:34:00.621549 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a6d5791-17c0-432e-a414-d0291ab1cf56" containerName="registry-server"
Jan 27 16:34:00 crc kubenswrapper[4767]: I0127 16:34:00.622822 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t2c56"
Need to start a new one" pod="openshift-marketplace/redhat-operators-t2c56" Jan 27 16:34:00 crc kubenswrapper[4767]: I0127 16:34:00.624045 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t2c56"] Jan 27 16:34:00 crc kubenswrapper[4767]: I0127 16:34:00.639680 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/729f75b5-4267-4772-a13e-531188a1e9b1-catalog-content\") pod \"redhat-operators-t2c56\" (UID: \"729f75b5-4267-4772-a13e-531188a1e9b1\") " pod="openshift-marketplace/redhat-operators-t2c56" Jan 27 16:34:00 crc kubenswrapper[4767]: I0127 16:34:00.639869 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-957r2\" (UniqueName: \"kubernetes.io/projected/729f75b5-4267-4772-a13e-531188a1e9b1-kube-api-access-957r2\") pod \"redhat-operators-t2c56\" (UID: \"729f75b5-4267-4772-a13e-531188a1e9b1\") " pod="openshift-marketplace/redhat-operators-t2c56" Jan 27 16:34:00 crc kubenswrapper[4767]: I0127 16:34:00.639924 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/729f75b5-4267-4772-a13e-531188a1e9b1-utilities\") pod \"redhat-operators-t2c56\" (UID: \"729f75b5-4267-4772-a13e-531188a1e9b1\") " pod="openshift-marketplace/redhat-operators-t2c56" Jan 27 16:34:00 crc kubenswrapper[4767]: I0127 16:34:00.740908 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/729f75b5-4267-4772-a13e-531188a1e9b1-utilities\") pod \"redhat-operators-t2c56\" (UID: \"729f75b5-4267-4772-a13e-531188a1e9b1\") " pod="openshift-marketplace/redhat-operators-t2c56" Jan 27 16:34:00 crc kubenswrapper[4767]: I0127 16:34:00.740984 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/729f75b5-4267-4772-a13e-531188a1e9b1-catalog-content\") pod \"redhat-operators-t2c56\" (UID: \"729f75b5-4267-4772-a13e-531188a1e9b1\") " pod="openshift-marketplace/redhat-operators-t2c56" Jan 27 16:34:00 crc kubenswrapper[4767]: I0127 16:34:00.741068 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-957r2\" (UniqueName: \"kubernetes.io/projected/729f75b5-4267-4772-a13e-531188a1e9b1-kube-api-access-957r2\") pod \"redhat-operators-t2c56\" (UID: \"729f75b5-4267-4772-a13e-531188a1e9b1\") " pod="openshift-marketplace/redhat-operators-t2c56" Jan 27 16:34:00 crc kubenswrapper[4767]: I0127 16:34:00.741620 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/729f75b5-4267-4772-a13e-531188a1e9b1-utilities\") pod \"redhat-operators-t2c56\" (UID: \"729f75b5-4267-4772-a13e-531188a1e9b1\") " pod="openshift-marketplace/redhat-operators-t2c56" Jan 27 16:34:00 crc kubenswrapper[4767]: I0127 16:34:00.741663 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/729f75b5-4267-4772-a13e-531188a1e9b1-catalog-content\") pod \"redhat-operators-t2c56\" (UID: \"729f75b5-4267-4772-a13e-531188a1e9b1\") " pod="openshift-marketplace/redhat-operators-t2c56" Jan 27 16:34:00 crc kubenswrapper[4767]: I0127 16:34:00.763305 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-957r2\" (UniqueName: \"kubernetes.io/projected/729f75b5-4267-4772-a13e-531188a1e9b1-kube-api-access-957r2\") pod \"redhat-operators-t2c56\" (UID: \"729f75b5-4267-4772-a13e-531188a1e9b1\") " pod="openshift-marketplace/redhat-operators-t2c56" Jan 27 16:34:00 crc kubenswrapper[4767]: I0127 16:34:00.947353 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t2c56" Jan 27 16:34:01 crc kubenswrapper[4767]: I0127 16:34:01.382122 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t2c56"] Jan 27 16:34:01 crc kubenswrapper[4767]: I0127 16:34:01.997487 4767 generic.go:334] "Generic (PLEG): container finished" podID="729f75b5-4267-4772-a13e-531188a1e9b1" containerID="c9ffd937a91fbd10a288f01861e6e385aebf62746e9f070fec3e479c81e2077f" exitCode=0 Jan 27 16:34:01 crc kubenswrapper[4767]: I0127 16:34:01.997586 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2c56" event={"ID":"729f75b5-4267-4772-a13e-531188a1e9b1","Type":"ContainerDied","Data":"c9ffd937a91fbd10a288f01861e6e385aebf62746e9f070fec3e479c81e2077f"} Jan 27 16:34:01 crc kubenswrapper[4767]: I0127 16:34:01.997903 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2c56" event={"ID":"729f75b5-4267-4772-a13e-531188a1e9b1","Type":"ContainerStarted","Data":"da19b892dac372524e4e63028e09d1a9040c336b154639fb94fd63ad2d689fc2"} Jan 27 16:34:03 crc kubenswrapper[4767]: I0127 16:34:03.007446 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2c56" event={"ID":"729f75b5-4267-4772-a13e-531188a1e9b1","Type":"ContainerStarted","Data":"d10bfba6d8aa936fce94957889473e605b509f239961e18864eef1eede305082"} Jan 27 16:34:04 crc kubenswrapper[4767]: I0127 16:34:04.016885 4767 generic.go:334] "Generic (PLEG): container finished" podID="729f75b5-4267-4772-a13e-531188a1e9b1" containerID="d10bfba6d8aa936fce94957889473e605b509f239961e18864eef1eede305082" exitCode=0 Jan 27 16:34:04 crc kubenswrapper[4767]: I0127 16:34:04.016937 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2c56" event={"ID":"729f75b5-4267-4772-a13e-531188a1e9b1","Type":"ContainerDied","Data":"d10bfba6d8aa936fce94957889473e605b509f239961e18864eef1eede305082"} Jan 27 16:34:05 crc kubenswrapper[4767]: I0127 16:34:05.030074 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2c56" event={"ID":"729f75b5-4267-4772-a13e-531188a1e9b1","Type":"ContainerStarted","Data":"480029ee9aa712aa7f62edbc4d7ac4a7ba81719d8a4b9a50405e439600afa02f"} Jan 27 16:34:05 crc kubenswrapper[4767]: I0127 16:34:05.049301 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t2c56" podStartSLOduration=2.359615008 podStartE2EDuration="5.049277013s" podCreationTimestamp="2026-01-27 16:34:00 +0000 UTC" firstStartedPulling="2026-01-27 16:34:01.99993095 +0000 UTC m=+2664.388948473" lastFinishedPulling="2026-01-27 16:34:04.689592955 +0000 UTC m=+2667.078610478" observedRunningTime="2026-01-27 16:34:05.047918045 +0000 UTC m=+2667.436935578" watchObservedRunningTime="2026-01-27 16:34:05.049277013 +0000 UTC m=+2667.438294556" Jan 27 16:34:06 crc kubenswrapper[4767]: I0127 16:34:06.325596 4767 scope.go:117] "RemoveContainer" containerID="e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc" Jan 27 
16:34:06 crc kubenswrapper[4767]: E0127 16:34:06.326306 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:34:10 crc kubenswrapper[4767]: I0127 16:34:10.948393 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t2c56" Jan 27 16:34:10 crc kubenswrapper[4767]: I0127 16:34:10.948763 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t2c56" Jan 27 16:34:10 crc kubenswrapper[4767]: I0127 16:34:10.998470 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t2c56" Jan 27 16:34:11 crc kubenswrapper[4767]: I0127 16:34:11.122989 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t2c56" Jan 27 16:34:11 crc kubenswrapper[4767]: I0127 16:34:11.242639 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t2c56"] Jan 27 16:34:13 crc kubenswrapper[4767]: I0127 16:34:13.081001 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t2c56" podUID="729f75b5-4267-4772-a13e-531188a1e9b1" containerName="registry-server" containerID="cri-o://480029ee9aa712aa7f62edbc4d7ac4a7ba81719d8a4b9a50405e439600afa02f" gracePeriod=2 Jan 27 16:34:13 crc kubenswrapper[4767]: I0127 16:34:13.472369 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t2c56" Jan 27 16:34:13 crc kubenswrapper[4767]: I0127 16:34:13.644573 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/729f75b5-4267-4772-a13e-531188a1e9b1-catalog-content\") pod \"729f75b5-4267-4772-a13e-531188a1e9b1\" (UID: \"729f75b5-4267-4772-a13e-531188a1e9b1\") " Jan 27 16:34:13 crc kubenswrapper[4767]: I0127 16:34:13.644701 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-957r2\" (UniqueName: \"kubernetes.io/projected/729f75b5-4267-4772-a13e-531188a1e9b1-kube-api-access-957r2\") pod \"729f75b5-4267-4772-a13e-531188a1e9b1\" (UID: \"729f75b5-4267-4772-a13e-531188a1e9b1\") " Jan 27 16:34:13 crc kubenswrapper[4767]: I0127 16:34:13.644736 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/729f75b5-4267-4772-a13e-531188a1e9b1-utilities\") pod \"729f75b5-4267-4772-a13e-531188a1e9b1\" (UID: \"729f75b5-4267-4772-a13e-531188a1e9b1\") " Jan 27 16:34:13 crc kubenswrapper[4767]: I0127 16:34:13.645790 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/729f75b5-4267-4772-a13e-531188a1e9b1-utilities" (OuterVolumeSpecName: "utilities") pod "729f75b5-4267-4772-a13e-531188a1e9b1" (UID: "729f75b5-4267-4772-a13e-531188a1e9b1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:34:13 crc kubenswrapper[4767]: I0127 16:34:13.653389 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/729f75b5-4267-4772-a13e-531188a1e9b1-kube-api-access-957r2" (OuterVolumeSpecName: "kube-api-access-957r2") pod "729f75b5-4267-4772-a13e-531188a1e9b1" (UID: "729f75b5-4267-4772-a13e-531188a1e9b1"). InnerVolumeSpecName "kube-api-access-957r2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:34:13 crc kubenswrapper[4767]: I0127 16:34:13.746393 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-957r2\" (UniqueName: \"kubernetes.io/projected/729f75b5-4267-4772-a13e-531188a1e9b1-kube-api-access-957r2\") on node \"crc\" DevicePath \"\"" Jan 27 16:34:13 crc kubenswrapper[4767]: I0127 16:34:13.746438 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/729f75b5-4267-4772-a13e-531188a1e9b1-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:34:14 crc kubenswrapper[4767]: I0127 16:34:14.088807 4767 generic.go:334] "Generic (PLEG): container finished" podID="729f75b5-4267-4772-a13e-531188a1e9b1" containerID="480029ee9aa712aa7f62edbc4d7ac4a7ba81719d8a4b9a50405e439600afa02f" exitCode=0 Jan 27 16:34:14 crc kubenswrapper[4767]: I0127 16:34:14.088858 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2c56" event={"ID":"729f75b5-4267-4772-a13e-531188a1e9b1","Type":"ContainerDied","Data":"480029ee9aa712aa7f62edbc4d7ac4a7ba81719d8a4b9a50405e439600afa02f"} Jan 27 16:34:14 crc kubenswrapper[4767]: I0127 16:34:14.088919 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2c56" event={"ID":"729f75b5-4267-4772-a13e-531188a1e9b1","Type":"ContainerDied","Data":"da19b892dac372524e4e63028e09d1a9040c336b154639fb94fd63ad2d689fc2"} Jan 27 16:34:14 crc kubenswrapper[4767]: I0127 16:34:14.088941 4767 scope.go:117] "RemoveContainer" containerID="480029ee9aa712aa7f62edbc4d7ac4a7ba81719d8a4b9a50405e439600afa02f" Jan 27 16:34:14 crc kubenswrapper[4767]: I0127 16:34:14.088872 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t2c56" Jan 27 16:34:14 crc kubenswrapper[4767]: I0127 16:34:14.111065 4767 scope.go:117] "RemoveContainer" containerID="d10bfba6d8aa936fce94957889473e605b509f239961e18864eef1eede305082" Jan 27 16:34:14 crc kubenswrapper[4767]: I0127 16:34:14.128483 4767 scope.go:117] "RemoveContainer" containerID="c9ffd937a91fbd10a288f01861e6e385aebf62746e9f070fec3e479c81e2077f" Jan 27 16:34:14 crc kubenswrapper[4767]: I0127 16:34:14.152922 4767 scope.go:117] "RemoveContainer" containerID="480029ee9aa712aa7f62edbc4d7ac4a7ba81719d8a4b9a50405e439600afa02f" Jan 27 16:34:14 crc kubenswrapper[4767]: E0127 16:34:14.153482 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"480029ee9aa712aa7f62edbc4d7ac4a7ba81719d8a4b9a50405e439600afa02f\": container with ID starting with 480029ee9aa712aa7f62edbc4d7ac4a7ba81719d8a4b9a50405e439600afa02f not found: ID does not exist" containerID="480029ee9aa712aa7f62edbc4d7ac4a7ba81719d8a4b9a50405e439600afa02f" Jan 27 16:34:14 crc kubenswrapper[4767]: I0127 16:34:14.153540 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"480029ee9aa712aa7f62edbc4d7ac4a7ba81719d8a4b9a50405e439600afa02f"} err="failed to get container status \"480029ee9aa712aa7f62edbc4d7ac4a7ba81719d8a4b9a50405e439600afa02f\": rpc error: code = NotFound desc = could not find container \"480029ee9aa712aa7f62edbc4d7ac4a7ba81719d8a4b9a50405e439600afa02f\": container with ID starting with 480029ee9aa712aa7f62edbc4d7ac4a7ba81719d8a4b9a50405e439600afa02f not found: ID does not exist" Jan 27 16:34:14 crc kubenswrapper[4767]: I0127 16:34:14.153571 4767 scope.go:117] "RemoveContainer" containerID="d10bfba6d8aa936fce94957889473e605b509f239961e18864eef1eede305082" Jan 27 16:34:14 crc kubenswrapper[4767]: E0127 16:34:14.154073 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d10bfba6d8aa936fce94957889473e605b509f239961e18864eef1eede305082\": container with ID starting with d10bfba6d8aa936fce94957889473e605b509f239961e18864eef1eede305082 not found: ID does not exist" containerID="d10bfba6d8aa936fce94957889473e605b509f239961e18864eef1eede305082" Jan 27 16:34:14 crc kubenswrapper[4767]: I0127 16:34:14.154107 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d10bfba6d8aa936fce94957889473e605b509f239961e18864eef1eede305082"} err="failed to get container status \"d10bfba6d8aa936fce94957889473e605b509f239961e18864eef1eede305082\": rpc error: code = NotFound desc = could not find container \"d10bfba6d8aa936fce94957889473e605b509f239961e18864eef1eede305082\": container with ID starting with d10bfba6d8aa936fce94957889473e605b509f239961e18864eef1eede305082 not found: ID does not exist" Jan 27 16:34:14 crc kubenswrapper[4767]: I0127 16:34:14.154138 4767 scope.go:117] "RemoveContainer" containerID="c9ffd937a91fbd10a288f01861e6e385aebf62746e9f070fec3e479c81e2077f" Jan 27 16:34:14 crc kubenswrapper[4767]: E0127 16:34:14.154442 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9ffd937a91fbd10a288f01861e6e385aebf62746e9f070fec3e479c81e2077f\": container with ID starting with c9ffd937a91fbd10a288f01861e6e385aebf62746e9f070fec3e479c81e2077f not found: ID does not exist" containerID="c9ffd937a91fbd10a288f01861e6e385aebf62746e9f070fec3e479c81e2077f" 
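The RemoveContainer / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" sequence above is the expected idempotent-delete pattern: the kubelet asks CRI-O for the status of a container it is about to clean up, but the runtime has already removed it, so the gRPC call comes back with code = NotFound and the kubelet simply logs the error and moves on, since the desired end state (container gone) already holds. What follows is a minimal illustrative sketch of that error-classification step, not the kubelet's actual implementation; the removeContainer helper and the hard-coded error text are assumptions made for the example, with only the gRPC status/codes API taken as real.

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer is a hypothetical stand-in for a CRI RemoveContainer
// call; the returned error mimics the "rpc error: code = NotFound ..."
// messages seen in the log above.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound,
		"could not find container %q: container with ID starting with %s not found: ID does not exist",
		id, id)
}

func main() {
	err := removeContainer("480029ee9aa7")
	// A NotFound from the runtime means the container is already gone,
	// so treat the delete as having succeeded rather than retrying.
	if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
		fmt.Println("container already removed:", s.Message())
		return
	}
	if err != nil {
		fmt.Println("remove failed:", err)
	}
}

The same benign pattern recurs later in this log at 16:39:27 for the community-operators-b95c2 pod, where all three of its containers (registry-server, extract-content, extract-utilities) produce the identical NotFound / "DeleteContainer returned error" pair during teardown.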
Jan 27 16:34:14 crc kubenswrapper[4767]: I0127 16:34:14.154479 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9ffd937a91fbd10a288f01861e6e385aebf62746e9f070fec3e479c81e2077f"} err="failed to get container status \"c9ffd937a91fbd10a288f01861e6e385aebf62746e9f070fec3e479c81e2077f\": rpc error: code = NotFound desc = could not find container \"c9ffd937a91fbd10a288f01861e6e385aebf62746e9f070fec3e479c81e2077f\": container with ID starting with c9ffd937a91fbd10a288f01861e6e385aebf62746e9f070fec3e479c81e2077f not found: ID does not exist" Jan 27 16:34:15 crc kubenswrapper[4767]: I0127 16:34:15.900899 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/729f75b5-4267-4772-a13e-531188a1e9b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "729f75b5-4267-4772-a13e-531188a1e9b1" (UID: "729f75b5-4267-4772-a13e-531188a1e9b1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:34:15 crc kubenswrapper[4767]: I0127 16:34:15.986376 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/729f75b5-4267-4772-a13e-531188a1e9b1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:34:16 crc kubenswrapper[4767]: I0127 16:34:16.225724 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t2c56"] Jan 27 16:34:16 crc kubenswrapper[4767]: I0127 16:34:16.231300 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t2c56"] Jan 27 16:34:16 crc kubenswrapper[4767]: I0127 16:34:16.334288 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="729f75b5-4267-4772-a13e-531188a1e9b1" path="/var/lib/kubelet/pods/729f75b5-4267-4772-a13e-531188a1e9b1/volumes" Jan 27 16:34:20 crc kubenswrapper[4767]: I0127 16:34:20.325766 4767 scope.go:117] "RemoveContainer" containerID="e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc" Jan 27 16:34:20 crc kubenswrapper[4767]: E0127 16:34:20.326178 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:34:33 crc kubenswrapper[4767]: I0127 16:34:33.325292 4767 scope.go:117] "RemoveContainer" containerID="e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc" Jan 27 16:34:33 crc kubenswrapper[4767]: E0127 16:34:33.326172 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:34:48 crc kubenswrapper[4767]: I0127 16:34:48.330028 4767 scope.go:117] "RemoveContainer" containerID="e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc" Jan 27 16:34:48 crc kubenswrapper[4767]: E0127 16:34:48.330877 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:35:01 crc kubenswrapper[4767]: I0127 16:35:01.325092 4767 scope.go:117] "RemoveContainer" containerID="e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc" Jan 27 16:35:02 crc kubenswrapper[4767]: I0127 16:35:02.485690 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerStarted","Data":"a373bc21e18d5895e7fd90d5a964bb5e6e5f56029193b2ab069d00d8991a86a1"} Jan 27 16:37:24 crc kubenswrapper[4767]: I0127 16:37:24.857547 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:37:24 crc kubenswrapper[4767]: I0127 16:37:24.858157 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:37:54 crc kubenswrapper[4767]: I0127 16:37:54.857681 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:37:54 crc kubenswrapper[4767]: I0127 16:37:54.858381 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:38:24 crc kubenswrapper[4767]: I0127 16:38:24.857756 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:38:24 crc kubenswrapper[4767]: I0127 16:38:24.858336 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:38:24 crc kubenswrapper[4767]: I0127 16:38:24.858388 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" Jan 27 16:38:24 crc kubenswrapper[4767]: I0127 16:38:24.859086 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"a373bc21e18d5895e7fd90d5a964bb5e6e5f56029193b2ab069d00d8991a86a1"} pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 16:38:24 crc kubenswrapper[4767]: I0127 16:38:24.859150 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" containerID="cri-o://a373bc21e18d5895e7fd90d5a964bb5e6e5f56029193b2ab069d00d8991a86a1" gracePeriod=600 Jan 27 16:38:25 crc kubenswrapper[4767]: I0127 16:38:25.036280 4767 generic.go:334] "Generic (PLEG): container finished" podID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerID="a373bc21e18d5895e7fd90d5a964bb5e6e5f56029193b2ab069d00d8991a86a1" exitCode=0 Jan 27 16:38:25 crc kubenswrapper[4767]: I0127 16:38:25.036329 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerDied","Data":"a373bc21e18d5895e7fd90d5a964bb5e6e5f56029193b2ab069d00d8991a86a1"} Jan 27 16:38:25 crc kubenswrapper[4767]: I0127 16:38:25.036399 4767 scope.go:117] "RemoveContainer" containerID="e35817e063cc397490bb801dff21c4f09225d298ab6e477d8a2dbdc087deb2dc" Jan 27 16:38:26 crc kubenswrapper[4767]: I0127 16:38:26.055165 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerStarted","Data":"ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1"} Jan 27 16:38:34 crc kubenswrapper[4767]: I0127 16:38:34.899163 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7nhhj"] Jan 27 16:38:34 crc kubenswrapper[4767]: E0127 16:38:34.900285 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="729f75b5-4267-4772-a13e-531188a1e9b1" containerName="extract-utilities" Jan 27 16:38:34 crc kubenswrapper[4767]: I0127 16:38:34.900302 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="729f75b5-4267-4772-a13e-531188a1e9b1" containerName="extract-utilities" Jan 27 16:38:34 crc kubenswrapper[4767]: E0127 16:38:34.900315 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="729f75b5-4267-4772-a13e-531188a1e9b1" containerName="extract-content" Jan 27 16:38:34 crc kubenswrapper[4767]: I0127 16:38:34.900322 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="729f75b5-4267-4772-a13e-531188a1e9b1" containerName="extract-content" Jan 27 16:38:34 crc kubenswrapper[4767]: E0127 16:38:34.900336 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="729f75b5-4267-4772-a13e-531188a1e9b1" containerName="registry-server" Jan 27 16:38:34 crc kubenswrapper[4767]: I0127 16:38:34.900343 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="729f75b5-4267-4772-a13e-531188a1e9b1" containerName="registry-server" Jan 27 16:38:34 crc kubenswrapper[4767]: I0127 16:38:34.900531 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="729f75b5-4267-4772-a13e-531188a1e9b1" containerName="registry-server" Jan 27 16:38:34 crc kubenswrapper[4767]: I0127 16:38:34.902016 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7nhhj" Jan 27 16:38:34 crc kubenswrapper[4767]: I0127 16:38:34.935836 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7nhhj"] Jan 27 16:38:35 crc kubenswrapper[4767]: I0127 16:38:35.074900 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4cjj\" (UniqueName: \"kubernetes.io/projected/b9e78241-2f3b-4919-ab94-418121d1d84e-kube-api-access-v4cjj\") pod \"certified-operators-7nhhj\" (UID: \"b9e78241-2f3b-4919-ab94-418121d1d84e\") " pod="openshift-marketplace/certified-operators-7nhhj" Jan 27 16:38:35 crc kubenswrapper[4767]: I0127 16:38:35.074946 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9e78241-2f3b-4919-ab94-418121d1d84e-catalog-content\") pod \"certified-operators-7nhhj\" (UID: \"b9e78241-2f3b-4919-ab94-418121d1d84e\") " pod="openshift-marketplace/certified-operators-7nhhj" Jan 27 16:38:35 crc kubenswrapper[4767]: I0127 16:38:35.075000 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9e78241-2f3b-4919-ab94-418121d1d84e-utilities\") pod \"certified-operators-7nhhj\" (UID: \"b9e78241-2f3b-4919-ab94-418121d1d84e\") " pod="openshift-marketplace/certified-operators-7nhhj" Jan 27 16:38:35 crc kubenswrapper[4767]: I0127 16:38:35.177087 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4cjj\" (UniqueName: \"kubernetes.io/projected/b9e78241-2f3b-4919-ab94-418121d1d84e-kube-api-access-v4cjj\") pod \"certified-operators-7nhhj\" (UID: \"b9e78241-2f3b-4919-ab94-418121d1d84e\") " pod="openshift-marketplace/certified-operators-7nhhj" Jan 27 16:38:35 crc kubenswrapper[4767]: I0127 16:38:35.177158 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9e78241-2f3b-4919-ab94-418121d1d84e-catalog-content\") pod \"certified-operators-7nhhj\" (UID: \"b9e78241-2f3b-4919-ab94-418121d1d84e\") " pod="openshift-marketplace/certified-operators-7nhhj" Jan 27 16:38:35 crc kubenswrapper[4767]: I0127 16:38:35.177288 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9e78241-2f3b-4919-ab94-418121d1d84e-utilities\") pod \"certified-operators-7nhhj\" (UID: \"b9e78241-2f3b-4919-ab94-418121d1d84e\") " pod="openshift-marketplace/certified-operators-7nhhj" Jan 27 16:38:35 crc kubenswrapper[4767]: I0127 16:38:35.178095 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9e78241-2f3b-4919-ab94-418121d1d84e-catalog-content\") pod \"certified-operators-7nhhj\" (UID: \"b9e78241-2f3b-4919-ab94-418121d1d84e\") " pod="openshift-marketplace/certified-operators-7nhhj" Jan 27 16:38:35 crc kubenswrapper[4767]: I0127 16:38:35.178250 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9e78241-2f3b-4919-ab94-418121d1d84e-utilities\") pod \"certified-operators-7nhhj\" (UID: \"b9e78241-2f3b-4919-ab94-418121d1d84e\") " pod="openshift-marketplace/certified-operators-7nhhj" Jan 27 16:38:35 crc kubenswrapper[4767]: I0127 16:38:35.208419 4767 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-v4cjj\" (UniqueName: \"kubernetes.io/projected/b9e78241-2f3b-4919-ab94-418121d1d84e-kube-api-access-v4cjj\") pod \"certified-operators-7nhhj\" (UID: \"b9e78241-2f3b-4919-ab94-418121d1d84e\") " pod="openshift-marketplace/certified-operators-7nhhj" Jan 27 16:38:35 crc kubenswrapper[4767]: I0127 16:38:35.218921 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7nhhj" Jan 27 16:38:35 crc kubenswrapper[4767]: I0127 16:38:35.731786 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7nhhj"] Jan 27 16:38:36 crc kubenswrapper[4767]: I0127 16:38:36.130536 4767 generic.go:334] "Generic (PLEG): container finished" podID="b9e78241-2f3b-4919-ab94-418121d1d84e" containerID="58dfd659fb83fd3d68bea86401487024e8ffd25e38ed0e8640c83c8009a564bf" exitCode=0 Jan 27 16:38:36 crc kubenswrapper[4767]: I0127 16:38:36.130679 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7nhhj" event={"ID":"b9e78241-2f3b-4919-ab94-418121d1d84e","Type":"ContainerDied","Data":"58dfd659fb83fd3d68bea86401487024e8ffd25e38ed0e8640c83c8009a564bf"} Jan 27 16:38:36 crc kubenswrapper[4767]: I0127 16:38:36.131364 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7nhhj" event={"ID":"b9e78241-2f3b-4919-ab94-418121d1d84e","Type":"ContainerStarted","Data":"d59c5e188c5d61eafbfb71773a1856da03eca8f73461601a7c1471f4e436c7a1"} Jan 27 16:38:36 crc kubenswrapper[4767]: I0127 16:38:36.135191 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 16:38:38 crc kubenswrapper[4767]: I0127 16:38:38.154230 4767 generic.go:334] "Generic (PLEG): container finished" podID="b9e78241-2f3b-4919-ab94-418121d1d84e" containerID="6b64c85ef26bf3a14995239fb9f5884e0a2ebe5fc6be486ae2a4149225a54319" exitCode=0 Jan 27 16:38:38 crc kubenswrapper[4767]: I0127 16:38:38.154266 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7nhhj" event={"ID":"b9e78241-2f3b-4919-ab94-418121d1d84e","Type":"ContainerDied","Data":"6b64c85ef26bf3a14995239fb9f5884e0a2ebe5fc6be486ae2a4149225a54319"} Jan 27 16:38:39 crc kubenswrapper[4767]: I0127 16:38:39.165737 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7nhhj" event={"ID":"b9e78241-2f3b-4919-ab94-418121d1d84e","Type":"ContainerStarted","Data":"d85ab0babb38329fa147de8372a5ca1e801f96b62af6c7389ff105f79084789d"} Jan 27 16:38:39 crc kubenswrapper[4767]: I0127 16:38:39.189803 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7nhhj" podStartSLOduration=2.7113617679999997 podStartE2EDuration="5.189781468s" podCreationTimestamp="2026-01-27 16:38:34 +0000 UTC" firstStartedPulling="2026-01-27 16:38:36.132943667 +0000 UTC m=+2938.521961190" lastFinishedPulling="2026-01-27 16:38:38.611363367 +0000 UTC m=+2941.000380890" observedRunningTime="2026-01-27 16:38:39.189604993 +0000 UTC m=+2941.578622516" watchObservedRunningTime="2026-01-27 16:38:39.189781468 +0000 UTC m=+2941.578799001" Jan 27 16:38:45 crc kubenswrapper[4767]: I0127 16:38:45.219924 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7nhhj" Jan 27 16:38:45 crc kubenswrapper[4767]: I0127 16:38:45.220754 4767 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7nhhj" Jan 27 16:38:45 crc kubenswrapper[4767]: I0127 16:38:45.297978 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7nhhj" Jan 27 16:38:46 crc kubenswrapper[4767]: I0127 16:38:46.274508 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7nhhj" Jan 27 16:38:46 crc kubenswrapper[4767]: I0127 16:38:46.349084 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7nhhj"] Jan 27 16:38:48 crc kubenswrapper[4767]: I0127 16:38:48.359885 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7nhhj" podUID="b9e78241-2f3b-4919-ab94-418121d1d84e" containerName="registry-server" containerID="cri-o://d85ab0babb38329fa147de8372a5ca1e801f96b62af6c7389ff105f79084789d" gracePeriod=2 Jan 27 16:38:49 crc kubenswrapper[4767]: I0127 16:38:49.369142 4767 generic.go:334] "Generic (PLEG): container finished" podID="b9e78241-2f3b-4919-ab94-418121d1d84e" containerID="d85ab0babb38329fa147de8372a5ca1e801f96b62af6c7389ff105f79084789d" exitCode=0 Jan 27 16:38:49 crc kubenswrapper[4767]: I0127 16:38:49.369216 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7nhhj" event={"ID":"b9e78241-2f3b-4919-ab94-418121d1d84e","Type":"ContainerDied","Data":"d85ab0babb38329fa147de8372a5ca1e801f96b62af6c7389ff105f79084789d"} Jan 27 16:38:49 crc kubenswrapper[4767]: I0127 16:38:49.459003 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7nhhj" Jan 27 16:38:49 crc kubenswrapper[4767]: I0127 16:38:49.559577 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9e78241-2f3b-4919-ab94-418121d1d84e-catalog-content\") pod \"b9e78241-2f3b-4919-ab94-418121d1d84e\" (UID: \"b9e78241-2f3b-4919-ab94-418121d1d84e\") " Jan 27 16:38:49 crc kubenswrapper[4767]: I0127 16:38:49.559944 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9e78241-2f3b-4919-ab94-418121d1d84e-utilities\") pod \"b9e78241-2f3b-4919-ab94-418121d1d84e\" (UID: \"b9e78241-2f3b-4919-ab94-418121d1d84e\") " Jan 27 16:38:49 crc kubenswrapper[4767]: I0127 16:38:49.560148 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4cjj\" (UniqueName: \"kubernetes.io/projected/b9e78241-2f3b-4919-ab94-418121d1d84e-kube-api-access-v4cjj\") pod \"b9e78241-2f3b-4919-ab94-418121d1d84e\" (UID: \"b9e78241-2f3b-4919-ab94-418121d1d84e\") " Jan 27 16:38:49 crc kubenswrapper[4767]: I0127 16:38:49.561092 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9e78241-2f3b-4919-ab94-418121d1d84e-utilities" (OuterVolumeSpecName: "utilities") pod "b9e78241-2f3b-4919-ab94-418121d1d84e" (UID: "b9e78241-2f3b-4919-ab94-418121d1d84e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:38:49 crc kubenswrapper[4767]: I0127 16:38:49.569566 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9e78241-2f3b-4919-ab94-418121d1d84e-kube-api-access-v4cjj" (OuterVolumeSpecName: "kube-api-access-v4cjj") pod "b9e78241-2f3b-4919-ab94-418121d1d84e" (UID: "b9e78241-2f3b-4919-ab94-418121d1d84e"). InnerVolumeSpecName "kube-api-access-v4cjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:38:49 crc kubenswrapper[4767]: I0127 16:38:49.607538 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9e78241-2f3b-4919-ab94-418121d1d84e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9e78241-2f3b-4919-ab94-418121d1d84e" (UID: "b9e78241-2f3b-4919-ab94-418121d1d84e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:38:49 crc kubenswrapper[4767]: I0127 16:38:49.661731 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4cjj\" (UniqueName: \"kubernetes.io/projected/b9e78241-2f3b-4919-ab94-418121d1d84e-kube-api-access-v4cjj\") on node \"crc\" DevicePath \"\"" Jan 27 16:38:49 crc kubenswrapper[4767]: I0127 16:38:49.661776 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9e78241-2f3b-4919-ab94-418121d1d84e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:38:49 crc kubenswrapper[4767]: I0127 16:38:49.661789 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9e78241-2f3b-4919-ab94-418121d1d84e-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:38:50 crc kubenswrapper[4767]: I0127 16:38:50.379155 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7nhhj" event={"ID":"b9e78241-2f3b-4919-ab94-418121d1d84e","Type":"ContainerDied","Data":"d59c5e188c5d61eafbfb71773a1856da03eca8f73461601a7c1471f4e436c7a1"} Jan 27 16:38:50 crc kubenswrapper[4767]: I0127 16:38:50.379251 4767 scope.go:117] "RemoveContainer" containerID="d85ab0babb38329fa147de8372a5ca1e801f96b62af6c7389ff105f79084789d" Jan 27 16:38:50 crc kubenswrapper[4767]: I0127 16:38:50.379435 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7nhhj" Jan 27 16:38:50 crc kubenswrapper[4767]: I0127 16:38:50.412365 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7nhhj"] Jan 27 16:38:50 crc kubenswrapper[4767]: I0127 16:38:50.413836 4767 scope.go:117] "RemoveContainer" containerID="6b64c85ef26bf3a14995239fb9f5884e0a2ebe5fc6be486ae2a4149225a54319" Jan 27 16:38:50 crc kubenswrapper[4767]: I0127 16:38:50.418669 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7nhhj"] Jan 27 16:38:50 crc kubenswrapper[4767]: I0127 16:38:50.433413 4767 scope.go:117] "RemoveContainer" containerID="58dfd659fb83fd3d68bea86401487024e8ffd25e38ed0e8640c83c8009a564bf" Jan 27 16:38:52 crc kubenswrapper[4767]: I0127 16:38:52.333834 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9e78241-2f3b-4919-ab94-418121d1d84e" path="/var/lib/kubelet/pods/b9e78241-2f3b-4919-ab94-418121d1d84e/volumes" Jan 27 16:39:14 crc kubenswrapper[4767]: I0127 16:39:14.324319 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b95c2"] Jan 27 16:39:14 crc kubenswrapper[4767]: E0127 16:39:14.325585 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9e78241-2f3b-4919-ab94-418121d1d84e" containerName="extract-utilities" Jan 27 16:39:14 crc kubenswrapper[4767]: I0127 16:39:14.325615 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9e78241-2f3b-4919-ab94-418121d1d84e" containerName="extract-utilities" Jan 27 16:39:14 crc kubenswrapper[4767]: E0127 16:39:14.325720 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9e78241-2f3b-4919-ab94-418121d1d84e" containerName="extract-content" Jan 27 16:39:14 crc kubenswrapper[4767]: I0127 16:39:14.325742 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9e78241-2f3b-4919-ab94-418121d1d84e" containerName="extract-content" Jan 27 16:39:14 crc kubenswrapper[4767]: E0127 16:39:14.325787 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9e78241-2f3b-4919-ab94-418121d1d84e" containerName="registry-server" Jan 27 16:39:14 crc kubenswrapper[4767]: I0127 16:39:14.325803 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9e78241-2f3b-4919-ab94-418121d1d84e" containerName="registry-server" Jan 27 16:39:14 crc kubenswrapper[4767]: I0127 16:39:14.326148 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9e78241-2f3b-4919-ab94-418121d1d84e" containerName="registry-server" Jan 27 16:39:14 crc kubenswrapper[4767]: I0127 16:39:14.328883 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b95c2" Jan 27 16:39:14 crc kubenswrapper[4767]: I0127 16:39:14.346836 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b95c2"] Jan 27 16:39:14 crc kubenswrapper[4767]: I0127 16:39:14.424232 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxfdn\" (UniqueName: \"kubernetes.io/projected/bae258ab-2734-4477-94bf-9c97269b44b7-kube-api-access-zxfdn\") pod \"community-operators-b95c2\" (UID: \"bae258ab-2734-4477-94bf-9c97269b44b7\") " pod="openshift-marketplace/community-operators-b95c2" Jan 27 16:39:14 crc kubenswrapper[4767]: I0127 16:39:14.424356 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bae258ab-2734-4477-94bf-9c97269b44b7-utilities\") pod \"community-operators-b95c2\" (UID: \"bae258ab-2734-4477-94bf-9c97269b44b7\") " pod="openshift-marketplace/community-operators-b95c2" Jan 27 16:39:14 crc kubenswrapper[4767]: I0127 16:39:14.424472 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bae258ab-2734-4477-94bf-9c97269b44b7-catalog-content\") pod \"community-operators-b95c2\" (UID: \"bae258ab-2734-4477-94bf-9c97269b44b7\") " pod="openshift-marketplace/community-operators-b95c2" Jan 27 16:39:14 crc kubenswrapper[4767]: I0127 16:39:14.526186 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxfdn\" (UniqueName: \"kubernetes.io/projected/bae258ab-2734-4477-94bf-9c97269b44b7-kube-api-access-zxfdn\") pod \"community-operators-b95c2\" (UID: \"bae258ab-2734-4477-94bf-9c97269b44b7\") " pod="openshift-marketplace/community-operators-b95c2" Jan 27 16:39:14 crc kubenswrapper[4767]: I0127 16:39:14.526241 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bae258ab-2734-4477-94bf-9c97269b44b7-utilities\") pod \"community-operators-b95c2\" (UID: \"bae258ab-2734-4477-94bf-9c97269b44b7\") " pod="openshift-marketplace/community-operators-b95c2" Jan 27 16:39:14 crc kubenswrapper[4767]: I0127 16:39:14.526271 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bae258ab-2734-4477-94bf-9c97269b44b7-catalog-content\") pod \"community-operators-b95c2\" (UID: \"bae258ab-2734-4477-94bf-9c97269b44b7\") " pod="openshift-marketplace/community-operators-b95c2" Jan 27 16:39:14 crc kubenswrapper[4767]: I0127 16:39:14.526703 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bae258ab-2734-4477-94bf-9c97269b44b7-catalog-content\") pod \"community-operators-b95c2\" (UID: \"bae258ab-2734-4477-94bf-9c97269b44b7\") " pod="openshift-marketplace/community-operators-b95c2" Jan 27 16:39:14 crc kubenswrapper[4767]: I0127 16:39:14.526791 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bae258ab-2734-4477-94bf-9c97269b44b7-utilities\") pod \"community-operators-b95c2\" (UID: \"bae258ab-2734-4477-94bf-9c97269b44b7\") " pod="openshift-marketplace/community-operators-b95c2" Jan 27 16:39:14 crc kubenswrapper[4767]: I0127 16:39:14.548082 4767 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zxfdn\" (UniqueName: \"kubernetes.io/projected/bae258ab-2734-4477-94bf-9c97269b44b7-kube-api-access-zxfdn\") pod \"community-operators-b95c2\" (UID: \"bae258ab-2734-4477-94bf-9c97269b44b7\") " pod="openshift-marketplace/community-operators-b95c2" Jan 27 16:39:14 crc kubenswrapper[4767]: I0127 16:39:14.655827 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b95c2" Jan 27 16:39:15 crc kubenswrapper[4767]: I0127 16:39:15.134241 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b95c2"] Jan 27 16:39:15 crc kubenswrapper[4767]: I0127 16:39:15.599054 4767 generic.go:334] "Generic (PLEG): container finished" podID="bae258ab-2734-4477-94bf-9c97269b44b7" containerID="bf05916b127021fefc3f345974be319ce6895ef31287761698b5d0c8c6564645" exitCode=0 Jan 27 16:39:15 crc kubenswrapper[4767]: I0127 16:39:15.599116 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b95c2" event={"ID":"bae258ab-2734-4477-94bf-9c97269b44b7","Type":"ContainerDied","Data":"bf05916b127021fefc3f345974be319ce6895ef31287761698b5d0c8c6564645"} Jan 27 16:39:15 crc kubenswrapper[4767]: I0127 16:39:15.599154 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b95c2" event={"ID":"bae258ab-2734-4477-94bf-9c97269b44b7","Type":"ContainerStarted","Data":"3bf3b7806fe8f73d10ff4bf1fcca1e103d0387c96ad2b1cefe05efc95ec2f424"} Jan 27 16:39:17 crc kubenswrapper[4767]: I0127 16:39:17.620777 4767 generic.go:334] "Generic (PLEG): container finished" podID="bae258ab-2734-4477-94bf-9c97269b44b7" containerID="06a6ed57289daecfe78aa5fde94c4598ff61ebf38b73f2f183292a571c8067d7" exitCode=0 Jan 27 16:39:17 crc kubenswrapper[4767]: I0127 16:39:17.620888 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b95c2" event={"ID":"bae258ab-2734-4477-94bf-9c97269b44b7","Type":"ContainerDied","Data":"06a6ed57289daecfe78aa5fde94c4598ff61ebf38b73f2f183292a571c8067d7"} Jan 27 16:39:18 crc kubenswrapper[4767]: I0127 16:39:18.633604 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b95c2" event={"ID":"bae258ab-2734-4477-94bf-9c97269b44b7","Type":"ContainerStarted","Data":"729325d2167f6048c37bc4ae94857498e254abfeac16b418f0a6841cbce103d4"} Jan 27 16:39:18 crc kubenswrapper[4767]: I0127 16:39:18.660028 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b95c2" podStartSLOduration=2.103660242 podStartE2EDuration="4.660008975s" podCreationTimestamp="2026-01-27 16:39:14 +0000 UTC" firstStartedPulling="2026-01-27 16:39:15.601286401 +0000 UTC m=+2977.990303954" lastFinishedPulling="2026-01-27 16:39:18.157635164 +0000 UTC m=+2980.546652687" observedRunningTime="2026-01-27 16:39:18.65805668 +0000 UTC m=+2981.047074213" watchObservedRunningTime="2026-01-27 16:39:18.660008975 +0000 UTC m=+2981.049026498" Jan 27 16:39:24 crc kubenswrapper[4767]: I0127 16:39:24.656467 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b95c2" Jan 27 16:39:24 crc kubenswrapper[4767]: I0127 16:39:24.656887 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b95c2" Jan 27 16:39:24 crc kubenswrapper[4767]: I0127 16:39:24.718251 4767 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b95c2" Jan 27 16:39:24 crc kubenswrapper[4767]: I0127 16:39:24.779960 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b95c2" Jan 27 16:39:24 crc kubenswrapper[4767]: I0127 16:39:24.956287 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b95c2"] Jan 27 16:39:26 crc kubenswrapper[4767]: I0127 16:39:26.701066 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b95c2" podUID="bae258ab-2734-4477-94bf-9c97269b44b7" containerName="registry-server" containerID="cri-o://729325d2167f6048c37bc4ae94857498e254abfeac16b418f0a6841cbce103d4" gracePeriod=2 Jan 27 16:39:27 crc kubenswrapper[4767]: I0127 16:39:27.200608 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b95c2" Jan 27 16:39:27 crc kubenswrapper[4767]: I0127 16:39:27.354495 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bae258ab-2734-4477-94bf-9c97269b44b7-catalog-content\") pod \"bae258ab-2734-4477-94bf-9c97269b44b7\" (UID: \"bae258ab-2734-4477-94bf-9c97269b44b7\") " Jan 27 16:39:27 crc kubenswrapper[4767]: I0127 16:39:27.354698 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bae258ab-2734-4477-94bf-9c97269b44b7-utilities\") pod \"bae258ab-2734-4477-94bf-9c97269b44b7\" (UID: \"bae258ab-2734-4477-94bf-9c97269b44b7\") " Jan 27 16:39:27 crc kubenswrapper[4767]: I0127 16:39:27.354738 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxfdn\" (UniqueName: \"kubernetes.io/projected/bae258ab-2734-4477-94bf-9c97269b44b7-kube-api-access-zxfdn\") pod \"bae258ab-2734-4477-94bf-9c97269b44b7\" (UID: \"bae258ab-2734-4477-94bf-9c97269b44b7\") " Jan 27 16:39:27 crc kubenswrapper[4767]: I0127 16:39:27.356631 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bae258ab-2734-4477-94bf-9c97269b44b7-utilities" (OuterVolumeSpecName: "utilities") pod "bae258ab-2734-4477-94bf-9c97269b44b7" (UID: "bae258ab-2734-4477-94bf-9c97269b44b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:39:27 crc kubenswrapper[4767]: I0127 16:39:27.363448 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bae258ab-2734-4477-94bf-9c97269b44b7-kube-api-access-zxfdn" (OuterVolumeSpecName: "kube-api-access-zxfdn") pod "bae258ab-2734-4477-94bf-9c97269b44b7" (UID: "bae258ab-2734-4477-94bf-9c97269b44b7"). InnerVolumeSpecName "kube-api-access-zxfdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:39:27 crc kubenswrapper[4767]: I0127 16:39:27.446377 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bae258ab-2734-4477-94bf-9c97269b44b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bae258ab-2734-4477-94bf-9c97269b44b7" (UID: "bae258ab-2734-4477-94bf-9c97269b44b7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:39:27 crc kubenswrapper[4767]: I0127 16:39:27.456860 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bae258ab-2734-4477-94bf-9c97269b44b7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:39:27 crc kubenswrapper[4767]: I0127 16:39:27.457533 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bae258ab-2734-4477-94bf-9c97269b44b7-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:39:27 crc kubenswrapper[4767]: I0127 16:39:27.457549 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxfdn\" (UniqueName: \"kubernetes.io/projected/bae258ab-2734-4477-94bf-9c97269b44b7-kube-api-access-zxfdn\") on node \"crc\" DevicePath \"\"" Jan 27 16:39:27 crc kubenswrapper[4767]: I0127 16:39:27.713443 4767 generic.go:334] "Generic (PLEG): container finished" podID="bae258ab-2734-4477-94bf-9c97269b44b7" containerID="729325d2167f6048c37bc4ae94857498e254abfeac16b418f0a6841cbce103d4" exitCode=0 Jan 27 16:39:27 crc kubenswrapper[4767]: I0127 16:39:27.713505 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b95c2" event={"ID":"bae258ab-2734-4477-94bf-9c97269b44b7","Type":"ContainerDied","Data":"729325d2167f6048c37bc4ae94857498e254abfeac16b418f0a6841cbce103d4"} Jan 27 16:39:27 crc kubenswrapper[4767]: I0127 16:39:27.713544 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b95c2" event={"ID":"bae258ab-2734-4477-94bf-9c97269b44b7","Type":"ContainerDied","Data":"3bf3b7806fe8f73d10ff4bf1fcca1e103d0387c96ad2b1cefe05efc95ec2f424"} Jan 27 16:39:27 crc kubenswrapper[4767]: I0127 16:39:27.713558 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b95c2"
Jan 27 16:39:27 crc kubenswrapper[4767]: I0127 16:39:27.713575 4767 scope.go:117] "RemoveContainer" containerID="729325d2167f6048c37bc4ae94857498e254abfeac16b418f0a6841cbce103d4"
Jan 27 16:39:27 crc kubenswrapper[4767]: I0127 16:39:27.754360 4767 scope.go:117] "RemoveContainer" containerID="06a6ed57289daecfe78aa5fde94c4598ff61ebf38b73f2f183292a571c8067d7"
Jan 27 16:39:27 crc kubenswrapper[4767]: I0127 16:39:27.773565 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b95c2"]
Jan 27 16:39:27 crc kubenswrapper[4767]: I0127 16:39:27.779843 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b95c2"]
Jan 27 16:39:27 crc kubenswrapper[4767]: I0127 16:39:27.787867 4767 scope.go:117] "RemoveContainer" containerID="bf05916b127021fefc3f345974be319ce6895ef31287761698b5d0c8c6564645"
Jan 27 16:39:27 crc kubenswrapper[4767]: I0127 16:39:27.831102 4767 scope.go:117] "RemoveContainer" containerID="729325d2167f6048c37bc4ae94857498e254abfeac16b418f0a6841cbce103d4"
Jan 27 16:39:27 crc kubenswrapper[4767]: E0127 16:39:27.831649 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"729325d2167f6048c37bc4ae94857498e254abfeac16b418f0a6841cbce103d4\": container with ID starting with 729325d2167f6048c37bc4ae94857498e254abfeac16b418f0a6841cbce103d4 not found: ID does not exist" containerID="729325d2167f6048c37bc4ae94857498e254abfeac16b418f0a6841cbce103d4"
Jan 27 16:39:27 crc kubenswrapper[4767]: I0127 16:39:27.831705 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"729325d2167f6048c37bc4ae94857498e254abfeac16b418f0a6841cbce103d4"} err="failed to get container status \"729325d2167f6048c37bc4ae94857498e254abfeac16b418f0a6841cbce103d4\": rpc error: code = NotFound desc = could not find container \"729325d2167f6048c37bc4ae94857498e254abfeac16b418f0a6841cbce103d4\": container with ID starting with 729325d2167f6048c37bc4ae94857498e254abfeac16b418f0a6841cbce103d4 not found: ID does not exist"
Jan 27 16:39:27 crc kubenswrapper[4767]: I0127 16:39:27.831740 4767 scope.go:117] "RemoveContainer" containerID="06a6ed57289daecfe78aa5fde94c4598ff61ebf38b73f2f183292a571c8067d7"
Jan 27 16:39:27 crc kubenswrapper[4767]: E0127 16:39:27.832090 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06a6ed57289daecfe78aa5fde94c4598ff61ebf38b73f2f183292a571c8067d7\": container with ID starting with 06a6ed57289daecfe78aa5fde94c4598ff61ebf38b73f2f183292a571c8067d7 not found: ID does not exist" containerID="06a6ed57289daecfe78aa5fde94c4598ff61ebf38b73f2f183292a571c8067d7"
Jan 27 16:39:27 crc kubenswrapper[4767]: I0127 16:39:27.832176 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06a6ed57289daecfe78aa5fde94c4598ff61ebf38b73f2f183292a571c8067d7"} err="failed to get container status \"06a6ed57289daecfe78aa5fde94c4598ff61ebf38b73f2f183292a571c8067d7\": rpc error: code = NotFound desc = could not find container \"06a6ed57289daecfe78aa5fde94c4598ff61ebf38b73f2f183292a571c8067d7\": container with ID starting with 06a6ed57289daecfe78aa5fde94c4598ff61ebf38b73f2f183292a571c8067d7 not found: ID does not exist"
Jan 27 16:39:27 crc kubenswrapper[4767]: I0127 16:39:27.832291 4767 scope.go:117] "RemoveContainer" containerID="bf05916b127021fefc3f345974be319ce6895ef31287761698b5d0c8c6564645"
Jan 27 16:39:27 crc kubenswrapper[4767]: E0127 16:39:27.832788 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf05916b127021fefc3f345974be319ce6895ef31287761698b5d0c8c6564645\": container with ID starting with bf05916b127021fefc3f345974be319ce6895ef31287761698b5d0c8c6564645 not found: ID does not exist" containerID="bf05916b127021fefc3f345974be319ce6895ef31287761698b5d0c8c6564645"
Jan 27 16:39:27 crc kubenswrapper[4767]: I0127 16:39:27.832814 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf05916b127021fefc3f345974be319ce6895ef31287761698b5d0c8c6564645"} err="failed to get container status \"bf05916b127021fefc3f345974be319ce6895ef31287761698b5d0c8c6564645\": rpc error: code = NotFound desc = could not find container \"bf05916b127021fefc3f345974be319ce6895ef31287761698b5d0c8c6564645\": container with ID starting with bf05916b127021fefc3f345974be319ce6895ef31287761698b5d0c8c6564645 not found: ID does not exist"
Jan 27 16:39:28 crc kubenswrapper[4767]: I0127 16:39:28.340090 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bae258ab-2734-4477-94bf-9c97269b44b7" path="/var/lib/kubelet/pods/bae258ab-2734-4477-94bf-9c97269b44b7/volumes"
Jan 27 16:40:54 crc kubenswrapper[4767]: I0127 16:40:54.857580 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 16:40:54 crc kubenswrapper[4767]: I0127 16:40:54.859249 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 16:41:24 crc kubenswrapper[4767]: I0127 16:41:24.857902 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 16:41:24 crc kubenswrapper[4767]: I0127 16:41:24.858513 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 16:41:54 crc kubenswrapper[4767]: I0127 16:41:54.858346 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 16:41:54 crc kubenswrapper[4767]: I0127 16:41:54.858837 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 16:41:54 crc kubenswrapper[4767]: I0127 16:41:54.858884 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx"
Jan 27 16:41:54 crc kubenswrapper[4767]: I0127 16:41:54.859546 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1"} pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 16:41:54 crc kubenswrapper[4767]: I0127 16:41:54.859623 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" containerID="cri-o://ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1" gracePeriod=600
Jan 27 16:41:54 crc kubenswrapper[4767]: E0127 16:41:54.990583 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:41:55 crc kubenswrapper[4767]: I0127 16:41:55.933528 4767 generic.go:334] "Generic (PLEG): container finished" podID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerID="ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1" exitCode=0
Jan 27 16:41:55 crc kubenswrapper[4767]: I0127 16:41:55.933598 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerDied","Data":"ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1"}
Jan 27 16:41:55 crc kubenswrapper[4767]: I0127 16:41:55.935049 4767 scope.go:117] "RemoveContainer" containerID="a373bc21e18d5895e7fd90d5a964bb5e6e5f56029193b2ab069d00d8991a86a1"
Jan 27 16:41:55 crc kubenswrapper[4767]: I0127 16:41:55.935775 4767 scope.go:117] "RemoveContainer" containerID="ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1"
Jan 27 16:41:55 crc kubenswrapper[4767]: E0127 16:41:55.936032 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:42:11 crc kubenswrapper[4767]: I0127 16:42:11.326297 4767 scope.go:117] "RemoveContainer" containerID="ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1"
Jan 27 16:42:11 crc kubenswrapper[4767]: E0127 16:42:11.326927 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:42:24 crc kubenswrapper[4767]: I0127 16:42:24.940075 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m9cwx"]
Jan 27 16:42:24 crc kubenswrapper[4767]: E0127 16:42:24.943648 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bae258ab-2734-4477-94bf-9c97269b44b7" containerName="registry-server"
Jan 27 16:42:24 crc kubenswrapper[4767]: I0127 16:42:24.943692 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="bae258ab-2734-4477-94bf-9c97269b44b7" containerName="registry-server"
Jan 27 16:42:24 crc kubenswrapper[4767]: E0127 16:42:24.943713 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bae258ab-2734-4477-94bf-9c97269b44b7" containerName="extract-content"
Jan 27 16:42:24 crc kubenswrapper[4767]: I0127 16:42:24.943721 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="bae258ab-2734-4477-94bf-9c97269b44b7" containerName="extract-content"
Jan 27 16:42:24 crc kubenswrapper[4767]: E0127 16:42:24.943743 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bae258ab-2734-4477-94bf-9c97269b44b7" containerName="extract-utilities"
Jan 27 16:42:24 crc kubenswrapper[4767]: I0127 16:42:24.943752 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="bae258ab-2734-4477-94bf-9c97269b44b7" containerName="extract-utilities"
Jan 27 16:42:24 crc kubenswrapper[4767]: I0127 16:42:24.943921 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="bae258ab-2734-4477-94bf-9c97269b44b7" containerName="registry-server"
Jan 27 16:42:24 crc kubenswrapper[4767]: I0127 16:42:24.945359 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9cwx"
Jan 27 16:42:24 crc kubenswrapper[4767]: I0127 16:42:24.956317 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9cwx"]
Jan 27 16:42:25 crc kubenswrapper[4767]: I0127 16:42:25.033895 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/378dfc35-2bf3-4d80-948b-ed1114bf4376-catalog-content\") pod \"redhat-marketplace-m9cwx\" (UID: \"378dfc35-2bf3-4d80-948b-ed1114bf4376\") " pod="openshift-marketplace/redhat-marketplace-m9cwx"
Jan 27 16:42:25 crc kubenswrapper[4767]: I0127 16:42:25.033946 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/378dfc35-2bf3-4d80-948b-ed1114bf4376-utilities\") pod \"redhat-marketplace-m9cwx\" (UID: \"378dfc35-2bf3-4d80-948b-ed1114bf4376\") " pod="openshift-marketplace/redhat-marketplace-m9cwx"
Jan 27 16:42:25 crc kubenswrapper[4767]: I0127 16:42:25.034028 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khqgm\" (UniqueName: \"kubernetes.io/projected/378dfc35-2bf3-4d80-948b-ed1114bf4376-kube-api-access-khqgm\") pod \"redhat-marketplace-m9cwx\" (UID: \"378dfc35-2bf3-4d80-948b-ed1114bf4376\") " pod="openshift-marketplace/redhat-marketplace-m9cwx"
Jan 27 16:42:25 crc kubenswrapper[4767]: I0127 16:42:25.135026 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/378dfc35-2bf3-4d80-948b-ed1114bf4376-catalog-content\") pod \"redhat-marketplace-m9cwx\" (UID: \"378dfc35-2bf3-4d80-948b-ed1114bf4376\") " pod="openshift-marketplace/redhat-marketplace-m9cwx"
Jan 27 16:42:25 crc kubenswrapper[4767]: I0127 16:42:25.135090 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/378dfc35-2bf3-4d80-948b-ed1114bf4376-utilities\") pod \"redhat-marketplace-m9cwx\" (UID: \"378dfc35-2bf3-4d80-948b-ed1114bf4376\") " pod="openshift-marketplace/redhat-marketplace-m9cwx"
Jan 27 16:42:25 crc kubenswrapper[4767]: I0127 16:42:25.135175 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khqgm\" (UniqueName: \"kubernetes.io/projected/378dfc35-2bf3-4d80-948b-ed1114bf4376-kube-api-access-khqgm\") pod \"redhat-marketplace-m9cwx\" (UID: \"378dfc35-2bf3-4d80-948b-ed1114bf4376\") " pod="openshift-marketplace/redhat-marketplace-m9cwx"
Jan 27 16:42:25 crc kubenswrapper[4767]: I0127 16:42:25.135696 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/378dfc35-2bf3-4d80-948b-ed1114bf4376-catalog-content\") pod \"redhat-marketplace-m9cwx\" (UID: \"378dfc35-2bf3-4d80-948b-ed1114bf4376\") " pod="openshift-marketplace/redhat-marketplace-m9cwx"
Jan 27 16:42:25 crc kubenswrapper[4767]: I0127 16:42:25.135735 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/378dfc35-2bf3-4d80-948b-ed1114bf4376-utilities\") pod \"redhat-marketplace-m9cwx\" (UID: \"378dfc35-2bf3-4d80-948b-ed1114bf4376\") " pod="openshift-marketplace/redhat-marketplace-m9cwx"
Jan 27 16:42:25 crc kubenswrapper[4767]: I0127 16:42:25.156551 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khqgm\" (UniqueName: \"kubernetes.io/projected/378dfc35-2bf3-4d80-948b-ed1114bf4376-kube-api-access-khqgm\") pod \"redhat-marketplace-m9cwx\" (UID: \"378dfc35-2bf3-4d80-948b-ed1114bf4376\") " pod="openshift-marketplace/redhat-marketplace-m9cwx"
Jan 27 16:42:25 crc kubenswrapper[4767]: I0127 16:42:25.281521 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9cwx"
Jan 27 16:42:25 crc kubenswrapper[4767]: I0127 16:42:25.326295 4767 scope.go:117] "RemoveContainer" containerID="ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1"
Jan 27 16:42:25 crc kubenswrapper[4767]: E0127 16:42:25.326515 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:42:25 crc kubenswrapper[4767]: I0127 16:42:25.531917 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9cwx"]
Jan 27 16:42:26 crc kubenswrapper[4767]: I0127 16:42:26.165471 4767 generic.go:334] "Generic (PLEG): container finished" podID="378dfc35-2bf3-4d80-948b-ed1114bf4376" containerID="ae8cb1c05d29daee31a5bd812281b45bc6b4f0b776eadb8214c1adacb24f936b" exitCode=0
Jan 27 16:42:26 crc kubenswrapper[4767]: I0127 16:42:26.165528 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9cwx" event={"ID":"378dfc35-2bf3-4d80-948b-ed1114bf4376","Type":"ContainerDied","Data":"ae8cb1c05d29daee31a5bd812281b45bc6b4f0b776eadb8214c1adacb24f936b"}
Jan 27 16:42:26 crc kubenswrapper[4767]: I0127 16:42:26.165789 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9cwx" event={"ID":"378dfc35-2bf3-4d80-948b-ed1114bf4376","Type":"ContainerStarted","Data":"c85023c045bce1a9113daa820308b8cda8a390173286c6f84b6a8d7ddb146bb0"}
Jan 27 16:42:28 crc kubenswrapper[4767]: I0127 16:42:28.203887 4767 generic.go:334] "Generic (PLEG): container finished" podID="378dfc35-2bf3-4d80-948b-ed1114bf4376" containerID="0c15a5c9da642fafd808e398ad45fa5333e03e820611b3d2f4c4e2ae438d7c3a" exitCode=0
Jan 27 16:42:28 crc kubenswrapper[4767]: I0127 16:42:28.204311 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9cwx" event={"ID":"378dfc35-2bf3-4d80-948b-ed1114bf4376","Type":"ContainerDied","Data":"0c15a5c9da642fafd808e398ad45fa5333e03e820611b3d2f4c4e2ae438d7c3a"}
Jan 27 16:42:29 crc kubenswrapper[4767]: I0127 16:42:29.214066 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9cwx" event={"ID":"378dfc35-2bf3-4d80-948b-ed1114bf4376","Type":"ContainerStarted","Data":"fe2b77e65d838cc931f5b617b65ac5397e1e56d5b43c366df044d7e93d7dbe61"}
Jan 27 16:42:29 crc kubenswrapper[4767]: I0127 16:42:29.243989 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m9cwx" podStartSLOduration=2.723265076 podStartE2EDuration="5.243965231s" podCreationTimestamp="2026-01-27 16:42:24 +0000 UTC" firstStartedPulling="2026-01-27 16:42:26.167528297 +0000 UTC m=+3168.556545860" lastFinishedPulling="2026-01-27 16:42:28.688228492 +0000 UTC m=+3171.077246015" observedRunningTime="2026-01-27 16:42:29.241288656 +0000 UTC m=+3171.630306189" watchObservedRunningTime="2026-01-27 16:42:29.243965231 +0000 UTC m=+3171.632982764"
Jan 27 16:42:35 crc kubenswrapper[4767]: I0127 16:42:35.281701 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m9cwx"
Jan 27 16:42:35 crc kubenswrapper[4767]: I0127 16:42:35.282133 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m9cwx"
Jan 27 16:42:35 crc kubenswrapper[4767]: I0127 16:42:35.336786 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m9cwx"
Jan 27 16:42:36 crc kubenswrapper[4767]: I0127 16:42:36.335683 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m9cwx"
Jan 27 16:42:36 crc kubenswrapper[4767]: I0127 16:42:36.390139 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9cwx"]
Jan 27 16:42:38 crc kubenswrapper[4767]: I0127 16:42:38.275228 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m9cwx" podUID="378dfc35-2bf3-4d80-948b-ed1114bf4376" containerName="registry-server" containerID="cri-o://fe2b77e65d838cc931f5b617b65ac5397e1e56d5b43c366df044d7e93d7dbe61" gracePeriod=2
Jan 27 16:42:39 crc kubenswrapper[4767]: I0127 16:42:39.286351 4767 generic.go:334] "Generic (PLEG): container finished" podID="378dfc35-2bf3-4d80-948b-ed1114bf4376" containerID="fe2b77e65d838cc931f5b617b65ac5397e1e56d5b43c366df044d7e93d7dbe61" exitCode=0
Jan 27 16:42:39 crc kubenswrapper[4767]: I0127 16:42:39.286457 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9cwx" event={"ID":"378dfc35-2bf3-4d80-948b-ed1114bf4376","Type":"ContainerDied","Data":"fe2b77e65d838cc931f5b617b65ac5397e1e56d5b43c366df044d7e93d7dbe61"}
Jan 27 16:42:39 crc kubenswrapper[4767]: I0127 16:42:39.325802 4767 scope.go:117] "RemoveContainer" containerID="ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1"
Jan 27 16:42:39 crc kubenswrapper[4767]: E0127 16:42:39.326475 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:42:39 crc kubenswrapper[4767]: I0127 16:42:39.610053 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9cwx"
Jan 27 16:42:39 crc kubenswrapper[4767]: I0127 16:42:39.654841 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/378dfc35-2bf3-4d80-948b-ed1114bf4376-catalog-content\") pod \"378dfc35-2bf3-4d80-948b-ed1114bf4376\" (UID: \"378dfc35-2bf3-4d80-948b-ed1114bf4376\") "
Jan 27 16:42:39 crc kubenswrapper[4767]: I0127 16:42:39.654902 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/378dfc35-2bf3-4d80-948b-ed1114bf4376-utilities\") pod \"378dfc35-2bf3-4d80-948b-ed1114bf4376\" (UID: \"378dfc35-2bf3-4d80-948b-ed1114bf4376\") "
Jan 27 16:42:39 crc kubenswrapper[4767]: I0127 16:42:39.654990 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khqgm\" (UniqueName: \"kubernetes.io/projected/378dfc35-2bf3-4d80-948b-ed1114bf4376-kube-api-access-khqgm\") pod \"378dfc35-2bf3-4d80-948b-ed1114bf4376\" (UID: \"378dfc35-2bf3-4d80-948b-ed1114bf4376\") "
Jan 27 16:42:39 crc kubenswrapper[4767]: I0127 16:42:39.656451 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/378dfc35-2bf3-4d80-948b-ed1114bf4376-utilities" (OuterVolumeSpecName: "utilities") pod "378dfc35-2bf3-4d80-948b-ed1114bf4376" (UID: "378dfc35-2bf3-4d80-948b-ed1114bf4376"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:42:39 crc kubenswrapper[4767]: I0127 16:42:39.661045 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/378dfc35-2bf3-4d80-948b-ed1114bf4376-kube-api-access-khqgm" (OuterVolumeSpecName: "kube-api-access-khqgm") pod "378dfc35-2bf3-4d80-948b-ed1114bf4376" (UID: "378dfc35-2bf3-4d80-948b-ed1114bf4376"). InnerVolumeSpecName "kube-api-access-khqgm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:42:39 crc kubenswrapper[4767]: I0127 16:42:39.686026 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/378dfc35-2bf3-4d80-948b-ed1114bf4376-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "378dfc35-2bf3-4d80-948b-ed1114bf4376" (UID: "378dfc35-2bf3-4d80-948b-ed1114bf4376"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:42:39 crc kubenswrapper[4767]: I0127 16:42:39.756633 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khqgm\" (UniqueName: \"kubernetes.io/projected/378dfc35-2bf3-4d80-948b-ed1114bf4376-kube-api-access-khqgm\") on node \"crc\" DevicePath \"\""
Jan 27 16:42:39 crc kubenswrapper[4767]: I0127 16:42:39.756683 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/378dfc35-2bf3-4d80-948b-ed1114bf4376-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 16:42:39 crc kubenswrapper[4767]: I0127 16:42:39.756705 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/378dfc35-2bf3-4d80-948b-ed1114bf4376-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 16:42:40 crc kubenswrapper[4767]: I0127 16:42:40.297044 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9cwx" event={"ID":"378dfc35-2bf3-4d80-948b-ed1114bf4376","Type":"ContainerDied","Data":"c85023c045bce1a9113daa820308b8cda8a390173286c6f84b6a8d7ddb146bb0"}
Jan 27 16:42:40 crc kubenswrapper[4767]: I0127 16:42:40.297119 4767 scope.go:117] "RemoveContainer" containerID="fe2b77e65d838cc931f5b617b65ac5397e1e56d5b43c366df044d7e93d7dbe61"
Jan 27 16:42:40 crc kubenswrapper[4767]: I0127 16:42:40.297173 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9cwx"
Jan 27 16:42:40 crc kubenswrapper[4767]: I0127 16:42:40.321532 4767 scope.go:117] "RemoveContainer" containerID="0c15a5c9da642fafd808e398ad45fa5333e03e820611b3d2f4c4e2ae438d7c3a"
Jan 27 16:42:40 crc kubenswrapper[4767]: I0127 16:42:40.348585 4767 scope.go:117] "RemoveContainer" containerID="ae8cb1c05d29daee31a5bd812281b45bc6b4f0b776eadb8214c1adacb24f936b"
Jan 27 16:42:40 crc kubenswrapper[4767]: I0127 16:42:40.394540 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9cwx"]
Jan 27 16:42:40 crc kubenswrapper[4767]: I0127 16:42:40.400329 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9cwx"]
Jan 27 16:42:42 crc kubenswrapper[4767]: I0127 16:42:42.345317 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="378dfc35-2bf3-4d80-948b-ed1114bf4376" path="/var/lib/kubelet/pods/378dfc35-2bf3-4d80-948b-ed1114bf4376/volumes"
Jan 27 16:42:53 crc kubenswrapper[4767]: I0127 16:42:53.325763 4767 scope.go:117] "RemoveContainer" containerID="ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1"
Jan 27 16:42:53 crc kubenswrapper[4767]: E0127 16:42:53.326945 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:43:05 crc kubenswrapper[4767]: I0127 16:43:05.325799 4767 scope.go:117] "RemoveContainer" containerID="ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1"
Jan 27 16:43:05 crc kubenswrapper[4767]: E0127 16:43:05.326604 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:43:18 crc kubenswrapper[4767]: I0127 16:43:18.330254 4767 scope.go:117] "RemoveContainer" containerID="ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1"
Jan 27 16:43:18 crc kubenswrapper[4767]: E0127 16:43:18.331504 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:43:30 crc kubenswrapper[4767]: I0127 16:43:30.325745 4767 scope.go:117] "RemoveContainer" containerID="ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1"
Jan 27 16:43:30 crc kubenswrapper[4767]: E0127 16:43:30.326497 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:43:42 crc kubenswrapper[4767]: I0127 16:43:42.328391 4767 scope.go:117] "RemoveContainer" containerID="ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1"
Jan 27 16:43:42 crc kubenswrapper[4767]: E0127 16:43:42.329675 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:43:56 crc kubenswrapper[4767]: I0127 16:43:56.326053 4767 scope.go:117] "RemoveContainer" containerID="ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1"
Jan 27 16:43:56 crc kubenswrapper[4767]: E0127 16:43:56.327136 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:44:09 crc kubenswrapper[4767]: I0127 16:44:09.325755 4767 scope.go:117] "RemoveContainer" containerID="ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1"
Jan 27 16:44:09 crc kubenswrapper[4767]: E0127 16:44:09.326623 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:44:17 crc kubenswrapper[4767]: I0127 16:44:17.565052 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zqqq5"]
Jan 27 16:44:17 crc kubenswrapper[4767]: E0127 16:44:17.568235 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="378dfc35-2bf3-4d80-948b-ed1114bf4376" containerName="extract-utilities"
Jan 27 16:44:17 crc kubenswrapper[4767]: I0127 16:44:17.568476 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="378dfc35-2bf3-4d80-948b-ed1114bf4376" containerName="extract-utilities"
Jan 27 16:44:17 crc kubenswrapper[4767]: E0127 16:44:17.568671 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="378dfc35-2bf3-4d80-948b-ed1114bf4376" containerName="registry-server"
Jan 27 16:44:17 crc kubenswrapper[4767]: I0127 16:44:17.568896 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="378dfc35-2bf3-4d80-948b-ed1114bf4376" containerName="registry-server"
Jan 27 16:44:17 crc kubenswrapper[4767]: E0127 16:44:17.569098 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="378dfc35-2bf3-4d80-948b-ed1114bf4376" containerName="extract-content"
Jan 27 16:44:17 crc kubenswrapper[4767]: I0127 16:44:17.569308 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="378dfc35-2bf3-4d80-948b-ed1114bf4376" containerName="extract-content"
Jan 27 16:44:17 crc kubenswrapper[4767]: I0127 16:44:17.569819 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="378dfc35-2bf3-4d80-948b-ed1114bf4376" containerName="registry-server"
Jan 27 16:44:17 crc kubenswrapper[4767]: I0127 16:44:17.572663 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqqq5"
Jan 27 16:44:17 crc kubenswrapper[4767]: I0127 16:44:17.585231 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zqqq5"]
Jan 27 16:44:17 crc kubenswrapper[4767]: I0127 16:44:17.621456 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7769a627-1ace-47fe-877b-28afc29b1d11-catalog-content\") pod \"redhat-operators-zqqq5\" (UID: \"7769a627-1ace-47fe-877b-28afc29b1d11\") " pod="openshift-marketplace/redhat-operators-zqqq5"
Jan 27 16:44:17 crc kubenswrapper[4767]: I0127 16:44:17.621536 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7769a627-1ace-47fe-877b-28afc29b1d11-utilities\") pod \"redhat-operators-zqqq5\" (UID: \"7769a627-1ace-47fe-877b-28afc29b1d11\") " pod="openshift-marketplace/redhat-operators-zqqq5"
Jan 27 16:44:17 crc kubenswrapper[4767]: I0127 16:44:17.621600 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgt94\" (UniqueName: \"kubernetes.io/projected/7769a627-1ace-47fe-877b-28afc29b1d11-kube-api-access-kgt94\") pod \"redhat-operators-zqqq5\" (UID: \"7769a627-1ace-47fe-877b-28afc29b1d11\") " pod="openshift-marketplace/redhat-operators-zqqq5"
Jan 27 16:44:17 crc kubenswrapper[4767]: I0127 16:44:17.723003 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7769a627-1ace-47fe-877b-28afc29b1d11-catalog-content\") pod \"redhat-operators-zqqq5\" (UID: \"7769a627-1ace-47fe-877b-28afc29b1d11\") " pod="openshift-marketplace/redhat-operators-zqqq5"
Jan 27 16:44:17 crc kubenswrapper[4767]: I0127 16:44:17.723092 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7769a627-1ace-47fe-877b-28afc29b1d11-utilities\") pod \"redhat-operators-zqqq5\" (UID: \"7769a627-1ace-47fe-877b-28afc29b1d11\") " pod="openshift-marketplace/redhat-operators-zqqq5"
Jan 27 16:44:17 crc kubenswrapper[4767]: I0127 16:44:17.723407 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgt94\" (UniqueName: \"kubernetes.io/projected/7769a627-1ace-47fe-877b-28afc29b1d11-kube-api-access-kgt94\") pod \"redhat-operators-zqqq5\" (UID: \"7769a627-1ace-47fe-877b-28afc29b1d11\") " pod="openshift-marketplace/redhat-operators-zqqq5"
Jan 27 16:44:17 crc kubenswrapper[4767]: I0127 16:44:17.723747 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7769a627-1ace-47fe-877b-28afc29b1d11-utilities\") pod \"redhat-operators-zqqq5\" (UID: \"7769a627-1ace-47fe-877b-28afc29b1d11\") " pod="openshift-marketplace/redhat-operators-zqqq5"
Jan 27 16:44:17 crc kubenswrapper[4767]: I0127 16:44:17.724171 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7769a627-1ace-47fe-877b-28afc29b1d11-catalog-content\") pod \"redhat-operators-zqqq5\" (UID: \"7769a627-1ace-47fe-877b-28afc29b1d11\") " pod="openshift-marketplace/redhat-operators-zqqq5"
Jan 27 16:44:17 crc kubenswrapper[4767]: I0127 16:44:17.747089 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgt94\" (UniqueName: \"kubernetes.io/projected/7769a627-1ace-47fe-877b-28afc29b1d11-kube-api-access-kgt94\") pod \"redhat-operators-zqqq5\" (UID: \"7769a627-1ace-47fe-877b-28afc29b1d11\") " pod="openshift-marketplace/redhat-operators-zqqq5"
Jan 27 16:44:17 crc kubenswrapper[4767]: I0127 16:44:17.933714 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqqq5"
Jan 27 16:44:18 crc kubenswrapper[4767]: I0127 16:44:18.374246 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zqqq5"]
Jan 27 16:44:19 crc kubenswrapper[4767]: I0127 16:44:19.082367 4767 generic.go:334] "Generic (PLEG): container finished" podID="7769a627-1ace-47fe-877b-28afc29b1d11" containerID="20bbddddbfbb79c3455467b6977eb1ea21bfda051b0f9ab30b68249f9e17dd1c" exitCode=0
Jan 27 16:44:19 crc kubenswrapper[4767]: I0127 16:44:19.082478 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqqq5" event={"ID":"7769a627-1ace-47fe-877b-28afc29b1d11","Type":"ContainerDied","Data":"20bbddddbfbb79c3455467b6977eb1ea21bfda051b0f9ab30b68249f9e17dd1c"}
Jan 27 16:44:19 crc kubenswrapper[4767]: I0127 16:44:19.082700 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqqq5" event={"ID":"7769a627-1ace-47fe-877b-28afc29b1d11","Type":"ContainerStarted","Data":"77d1c765262bab42131a20c0258748060eb984d07fed4fee15cc97d817c46f2c"}
Jan 27 16:44:19 crc kubenswrapper[4767]: I0127 16:44:19.086268 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 16:44:20 crc kubenswrapper[4767]: I0127 16:44:20.094819 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqqq5" event={"ID":"7769a627-1ace-47fe-877b-28afc29b1d11","Type":"ContainerStarted","Data":"53f790db834403f9d5cc2980536d9996f9dc896eaced591df40c9f017603ff8d"}
Jan 27 16:44:21 crc kubenswrapper[4767]: I0127 16:44:21.102901 4767 generic.go:334] "Generic (PLEG): container finished" podID="7769a627-1ace-47fe-877b-28afc29b1d11" containerID="53f790db834403f9d5cc2980536d9996f9dc896eaced591df40c9f017603ff8d" exitCode=0
Jan 27 16:44:21 crc kubenswrapper[4767]: I0127 16:44:21.102952 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqqq5" event={"ID":"7769a627-1ace-47fe-877b-28afc29b1d11","Type":"ContainerDied","Data":"53f790db834403f9d5cc2980536d9996f9dc896eaced591df40c9f017603ff8d"}
Jan 27 16:44:21 crc kubenswrapper[4767]: I0127 16:44:21.325660 4767 scope.go:117] "RemoveContainer" containerID="ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1"
Jan 27 16:44:21 crc kubenswrapper[4767]: E0127 16:44:21.326027 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:44:22 crc kubenswrapper[4767]: I0127 16:44:22.113065 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqqq5" event={"ID":"7769a627-1ace-47fe-877b-28afc29b1d11","Type":"ContainerStarted","Data":"d86889136f7d0b5facd024ee036af7444d2d6707d4f0300c134249b02cb61da7"}
Jan 27 16:44:22 crc kubenswrapper[4767]: I0127 16:44:22.139796 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zqqq5" podStartSLOduration=2.612666697 podStartE2EDuration="5.139774353s" podCreationTimestamp="2026-01-27 16:44:17 +0000 UTC" firstStartedPulling="2026-01-27 16:44:19.085977808 +0000 UTC m=+3281.474995331" lastFinishedPulling="2026-01-27 16:44:21.613085464 +0000 UTC m=+3284.002102987" observedRunningTime="2026-01-27 16:44:22.133818764 +0000 UTC m=+3284.522836297" watchObservedRunningTime="2026-01-27 16:44:22.139774353 +0000 UTC m=+3284.528791876"
Jan 27 16:44:27 crc kubenswrapper[4767]: I0127 16:44:27.934700 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zqqq5"
Jan 27 16:44:27 crc kubenswrapper[4767]: I0127 16:44:27.936897 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zqqq5"
Jan 27 16:44:28 crc kubenswrapper[4767]: I0127 16:44:28.992540 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zqqq5" podUID="7769a627-1ace-47fe-877b-28afc29b1d11" containerName="registry-server" probeResult="failure" output=<
Jan 27 16:44:28 crc kubenswrapper[4767]: timeout: failed to connect service ":50051" within 1s
Jan 27 16:44:28 crc kubenswrapper[4767]: >
Jan 27 16:44:34 crc kubenswrapper[4767]: I0127 16:44:34.326406 4767 scope.go:117] "RemoveContainer" containerID="ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1"
Jan 27 16:44:34 crc kubenswrapper[4767]: E0127 16:44:34.327571 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:44:37 crc kubenswrapper[4767]: I0127 16:44:37.989944 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zqqq5"
Jan 27 16:44:38 crc kubenswrapper[4767]: I0127 16:44:38.064353 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zqqq5"
Jan 27 16:44:38 crc kubenswrapper[4767]: I0127 16:44:38.238840 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zqqq5"]
Jan 27 16:44:39 crc kubenswrapper[4767]: I0127 16:44:39.256000 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zqqq5" podUID="7769a627-1ace-47fe-877b-28afc29b1d11" containerName="registry-server" containerID="cri-o://d86889136f7d0b5facd024ee036af7444d2d6707d4f0300c134249b02cb61da7" gracePeriod=2
Jan 27 16:44:39 crc kubenswrapper[4767]: I0127 16:44:39.648047 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqqq5"
Jan 27 16:44:39 crc kubenswrapper[4767]: I0127 16:44:39.777552 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7769a627-1ace-47fe-877b-28afc29b1d11-catalog-content\") pod \"7769a627-1ace-47fe-877b-28afc29b1d11\" (UID: \"7769a627-1ace-47fe-877b-28afc29b1d11\") "
Jan 27 16:44:39 crc kubenswrapper[4767]: I0127 16:44:39.777695 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7769a627-1ace-47fe-877b-28afc29b1d11-utilities\") pod \"7769a627-1ace-47fe-877b-28afc29b1d11\" (UID: \"7769a627-1ace-47fe-877b-28afc29b1d11\") "
Jan 27 16:44:39 crc kubenswrapper[4767]: I0127 16:44:39.777759 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgt94\" (UniqueName: \"kubernetes.io/projected/7769a627-1ace-47fe-877b-28afc29b1d11-kube-api-access-kgt94\") pod \"7769a627-1ace-47fe-877b-28afc29b1d11\" (UID: \"7769a627-1ace-47fe-877b-28afc29b1d11\") "
Jan 27 16:44:39 crc kubenswrapper[4767]: I0127 16:44:39.779225 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7769a627-1ace-47fe-877b-28afc29b1d11-utilities" (OuterVolumeSpecName: "utilities") pod "7769a627-1ace-47fe-877b-28afc29b1d11" (UID: "7769a627-1ace-47fe-877b-28afc29b1d11"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:44:39 crc kubenswrapper[4767]: I0127 16:44:39.783248 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7769a627-1ace-47fe-877b-28afc29b1d11-kube-api-access-kgt94" (OuterVolumeSpecName: "kube-api-access-kgt94") pod "7769a627-1ace-47fe-877b-28afc29b1d11" (UID: "7769a627-1ace-47fe-877b-28afc29b1d11"). InnerVolumeSpecName "kube-api-access-kgt94". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:44:39 crc kubenswrapper[4767]: I0127 16:44:39.879427 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7769a627-1ace-47fe-877b-28afc29b1d11-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 16:44:39 crc kubenswrapper[4767]: I0127 16:44:39.879482 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgt94\" (UniqueName: \"kubernetes.io/projected/7769a627-1ace-47fe-877b-28afc29b1d11-kube-api-access-kgt94\") on node \"crc\" DevicePath \"\""
Jan 27 16:44:39 crc kubenswrapper[4767]: I0127 16:44:39.971434 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7769a627-1ace-47fe-877b-28afc29b1d11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7769a627-1ace-47fe-877b-28afc29b1d11" (UID: "7769a627-1ace-47fe-877b-28afc29b1d11"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:44:39 crc kubenswrapper[4767]: I0127 16:44:39.981267 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7769a627-1ace-47fe-877b-28afc29b1d11-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 16:44:40 crc kubenswrapper[4767]: I0127 16:44:40.267319 4767 generic.go:334] "Generic (PLEG): container finished" podID="7769a627-1ace-47fe-877b-28afc29b1d11" containerID="d86889136f7d0b5facd024ee036af7444d2d6707d4f0300c134249b02cb61da7" exitCode=0
Jan 27 16:44:40 crc kubenswrapper[4767]: I0127 16:44:40.267385 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqqq5" event={"ID":"7769a627-1ace-47fe-877b-28afc29b1d11","Type":"ContainerDied","Data":"d86889136f7d0b5facd024ee036af7444d2d6707d4f0300c134249b02cb61da7"}
Jan 27 16:44:40 crc kubenswrapper[4767]: I0127 16:44:40.267426 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqqq5"
Jan 27 16:44:40 crc kubenswrapper[4767]: I0127 16:44:40.267454 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqqq5" event={"ID":"7769a627-1ace-47fe-877b-28afc29b1d11","Type":"ContainerDied","Data":"77d1c765262bab42131a20c0258748060eb984d07fed4fee15cc97d817c46f2c"}
Jan 27 16:44:40 crc kubenswrapper[4767]: I0127 16:44:40.267491 4767 scope.go:117] "RemoveContainer" containerID="d86889136f7d0b5facd024ee036af7444d2d6707d4f0300c134249b02cb61da7"
Jan 27 16:44:40 crc kubenswrapper[4767]: I0127 16:44:40.299127 4767 scope.go:117] "RemoveContainer" containerID="53f790db834403f9d5cc2980536d9996f9dc896eaced591df40c9f017603ff8d"
Jan 27 16:44:40 crc kubenswrapper[4767]: I0127 16:44:40.323262 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zqqq5"]
Jan 27 16:44:40 crc kubenswrapper[4767]: I0127 16:44:40.339955 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zqqq5"]
Jan 27 16:44:40 crc kubenswrapper[4767]: I0127 16:44:40.344057 4767 scope.go:117] "RemoveContainer" containerID="20bbddddbfbb79c3455467b6977eb1ea21bfda051b0f9ab30b68249f9e17dd1c"
Jan 27 16:44:40 crc kubenswrapper[4767]: I0127 16:44:40.374061 4767 scope.go:117] "RemoveContainer" containerID="d86889136f7d0b5facd024ee036af7444d2d6707d4f0300c134249b02cb61da7"
Jan 27 16:44:40 crc kubenswrapper[4767]: E0127 16:44:40.374645 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d86889136f7d0b5facd024ee036af7444d2d6707d4f0300c134249b02cb61da7\": container with ID starting with d86889136f7d0b5facd024ee036af7444d2d6707d4f0300c134249b02cb61da7 not found: ID does not exist" containerID="d86889136f7d0b5facd024ee036af7444d2d6707d4f0300c134249b02cb61da7"
Jan 27 16:44:40 crc kubenswrapper[4767]: I0127 16:44:40.374678 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d86889136f7d0b5facd024ee036af7444d2d6707d4f0300c134249b02cb61da7"} err="failed to get container status \"d86889136f7d0b5facd024ee036af7444d2d6707d4f0300c134249b02cb61da7\": rpc error: code = NotFound desc = could not find container \"d86889136f7d0b5facd024ee036af7444d2d6707d4f0300c134249b02cb61da7\": container with ID starting with d86889136f7d0b5facd024ee036af7444d2d6707d4f0300c134249b02cb61da7 not found: ID does not exist"
Jan 27 16:44:40 crc kubenswrapper[4767]: I0127 16:44:40.374701 4767 scope.go:117] "RemoveContainer" containerID="53f790db834403f9d5cc2980536d9996f9dc896eaced591df40c9f017603ff8d"
Jan 27 16:44:40 crc kubenswrapper[4767]: E0127 16:44:40.375142 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53f790db834403f9d5cc2980536d9996f9dc896eaced591df40c9f017603ff8d\": container with ID starting with 53f790db834403f9d5cc2980536d9996f9dc896eaced591df40c9f017603ff8d not found: ID does not exist" containerID="53f790db834403f9d5cc2980536d9996f9dc896eaced591df40c9f017603ff8d"
Jan 27 16:44:40 crc kubenswrapper[4767]: I0127 16:44:40.375333 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53f790db834403f9d5cc2980536d9996f9dc896eaced591df40c9f017603ff8d"} err="failed to get container status \"53f790db834403f9d5cc2980536d9996f9dc896eaced591df40c9f017603ff8d\": rpc error: code = NotFound desc = could not find container \"53f790db834403f9d5cc2980536d9996f9dc896eaced591df40c9f017603ff8d\": container with ID starting with 53f790db834403f9d5cc2980536d9996f9dc896eaced591df40c9f017603ff8d not found: ID does not exist"
Jan 27 16:44:40 crc kubenswrapper[4767]: I0127 16:44:40.375374 4767 scope.go:117] "RemoveContainer" containerID="20bbddddbfbb79c3455467b6977eb1ea21bfda051b0f9ab30b68249f9e17dd1c"
Jan 27 16:44:40 crc kubenswrapper[4767]: E0127 16:44:40.375853 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20bbddddbfbb79c3455467b6977eb1ea21bfda051b0f9ab30b68249f9e17dd1c\": container with ID starting with 20bbddddbfbb79c3455467b6977eb1ea21bfda051b0f9ab30b68249f9e17dd1c not found: ID does not exist" containerID="20bbddddbfbb79c3455467b6977eb1ea21bfda051b0f9ab30b68249f9e17dd1c"
Jan 27 16:44:40 crc kubenswrapper[4767]: I0127 16:44:40.375899 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20bbddddbfbb79c3455467b6977eb1ea21bfda051b0f9ab30b68249f9e17dd1c"} err="failed to get container status \"20bbddddbfbb79c3455467b6977eb1ea21bfda051b0f9ab30b68249f9e17dd1c\": rpc error: code = NotFound desc = could not find container \"20bbddddbfbb79c3455467b6977eb1ea21bfda051b0f9ab30b68249f9e17dd1c\": container with ID starting with 20bbddddbfbb79c3455467b6977eb1ea21bfda051b0f9ab30b68249f9e17dd1c not found: ID does not exist"
Jan 27 16:44:42 crc kubenswrapper[4767]: I0127 16:44:42.336889 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7769a627-1ace-47fe-877b-28afc29b1d11" path="/var/lib/kubelet/pods/7769a627-1ace-47fe-877b-28afc29b1d11/volumes"
Jan 27 16:44:49 crc kubenswrapper[4767]: I0127 16:44:49.325154 4767 scope.go:117] "RemoveContainer" containerID="ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1"
Jan 27 16:44:49 crc kubenswrapper[4767]: E0127 16:44:49.325839 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:45:00 crc kubenswrapper[4767]: I0127 16:45:00.151425 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492205-xxdkk"]
Jan 27 16:45:00 crc kubenswrapper[4767]: E0127 16:45:00.152603 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7769a627-1ace-47fe-877b-28afc29b1d11" containerName="extract-utilities"
Jan 27 16:45:00 crc kubenswrapper[4767]: I0127 16:45:00.152627 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="7769a627-1ace-47fe-877b-28afc29b1d11" containerName="extract-utilities"
Jan 27 16:45:00 crc kubenswrapper[4767]: E0127 16:45:00.152654 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7769a627-1ace-47fe-877b-28afc29b1d11" containerName="extract-content"
Jan 27 16:45:00 crc kubenswrapper[4767]: I0127 16:45:00.152666 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="7769a627-1ace-47fe-877b-28afc29b1d11" containerName="extract-content"
Jan 27 16:45:00 crc kubenswrapper[4767]: E0127 16:45:00.152691 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7769a627-1ace-47fe-877b-28afc29b1d11" containerName="registry-server"
Jan 27 16:45:00 crc kubenswrapper[4767]: I0127 16:45:00.152704 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="7769a627-1ace-47fe-877b-28afc29b1d11" containerName="registry-server"
Jan 27 16:45:00 crc kubenswrapper[4767]: I0127 16:45:00.152944 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="7769a627-1ace-47fe-877b-28afc29b1d11" containerName="registry-server"
Jan 27 16:45:00 crc kubenswrapper[4767]: I0127 16:45:00.153775 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-xxdkk"
Jan 27 16:45:00 crc kubenswrapper[4767]: I0127 16:45:00.159910 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492205-xxdkk"]
Jan 27 16:45:00 crc kubenswrapper[4767]: I0127 16:45:00.192966 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 27 16:45:00 crc kubenswrapper[4767]: I0127 16:45:00.193027 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 27 16:45:00 crc kubenswrapper[4767]: I0127 16:45:00.294536 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc85046e-7e1d-41cc-8b24-0bff2e9dd764-config-volume\") pod \"collect-profiles-29492205-xxdkk\" (UID: \"fc85046e-7e1d-41cc-8b24-0bff2e9dd764\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-xxdkk"
Jan 27 16:45:00 crc kubenswrapper[4767]: I0127 16:45:00.294608 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl4jx\" (UniqueName: \"kubernetes.io/projected/fc85046e-7e1d-41cc-8b24-0bff2e9dd764-kube-api-access-pl4jx\") pod \"collect-profiles-29492205-xxdkk\" (UID: \"fc85046e-7e1d-41cc-8b24-0bff2e9dd764\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-xxdkk"
Jan 27 16:45:00 crc kubenswrapper[4767]: I0127 16:45:00.294641 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc85046e-7e1d-41cc-8b24-0bff2e9dd764-secret-volume\") pod \"collect-profiles-29492205-xxdkk\" (UID: \"fc85046e-7e1d-41cc-8b24-0bff2e9dd764\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-xxdkk"
Jan 27 16:45:00 crc kubenswrapper[4767]: I0127 16:45:00.396307 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc85046e-7e1d-41cc-8b24-0bff2e9dd764-config-volume\") pod \"collect-profiles-29492205-xxdkk\" (UID: \"fc85046e-7e1d-41cc-8b24-0bff2e9dd764\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-xxdkk"
Jan 27 16:45:00 crc kubenswrapper[4767]: I0127 16:45:00.396632 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl4jx\" (UniqueName: \"kubernetes.io/projected/fc85046e-7e1d-41cc-8b24-0bff2e9dd764-kube-api-access-pl4jx\") pod \"collect-profiles-29492205-xxdkk\" (UID: \"fc85046e-7e1d-41cc-8b24-0bff2e9dd764\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-xxdkk"
Jan 27 16:45:00 crc kubenswrapper[4767]: I0127 16:45:00.396683 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc85046e-7e1d-41cc-8b24-0bff2e9dd764-secret-volume\") pod \"collect-profiles-29492205-xxdkk\" (UID: \"fc85046e-7e1d-41cc-8b24-0bff2e9dd764\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-xxdkk"
Jan 27 16:45:00 crc kubenswrapper[4767]: I0127 16:45:00.397351 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc85046e-7e1d-41cc-8b24-0bff2e9dd764-config-volume\") pod \"collect-profiles-29492205-xxdkk\" (UID: \"fc85046e-7e1d-41cc-8b24-0bff2e9dd764\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-xxdkk"
Jan 27 16:45:00 crc kubenswrapper[4767]: I0127 16:45:00.403024 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc85046e-7e1d-41cc-8b24-0bff2e9dd764-secret-volume\") pod \"collect-profiles-29492205-xxdkk\" (UID: \"fc85046e-7e1d-41cc-8b24-0bff2e9dd764\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-xxdkk"
Jan 27 16:45:00 crc kubenswrapper[4767]: I0127 16:45:00.416178 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl4jx\" (UniqueName: \"kubernetes.io/projected/fc85046e-7e1d-41cc-8b24-0bff2e9dd764-kube-api-access-pl4jx\") pod \"collect-profiles-29492205-xxdkk\" (UID: \"fc85046e-7e1d-41cc-8b24-0bff2e9dd764\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-xxdkk"
Jan 27 16:45:00 crc kubenswrapper[4767]: I0127 16:45:00.524749 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-xxdkk"
Jan 27 16:45:00 crc kubenswrapper[4767]: I0127 16:45:00.927622 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492205-xxdkk"]
Jan 27 16:45:01 crc kubenswrapper[4767]: I0127 16:45:01.444156 4767 generic.go:334] "Generic (PLEG): container finished" podID="fc85046e-7e1d-41cc-8b24-0bff2e9dd764" containerID="958b21b1c38bb6e6f36dee22b806cefd6327b0a7876fef78a6e8ee80db3d60b2" exitCode=0
Jan 27 16:45:01 crc kubenswrapper[4767]: I0127 16:45:01.444198 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-xxdkk" event={"ID":"fc85046e-7e1d-41cc-8b24-0bff2e9dd764","Type":"ContainerDied","Data":"958b21b1c38bb6e6f36dee22b806cefd6327b0a7876fef78a6e8ee80db3d60b2"}
Jan 27 16:45:01 crc kubenswrapper[4767]: I0127 16:45:01.444236 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-xxdkk" event={"ID":"fc85046e-7e1d-41cc-8b24-0bff2e9dd764","Type":"ContainerStarted","Data":"238d9d42ac1963f11eeccc85f883b37139f01c02688b1eefbd1c2b46bddbfb89"}
Jan 27 16:45:02 crc kubenswrapper[4767]: I0127 16:45:02.752933 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-xxdkk"
Jan 27 16:45:02 crc kubenswrapper[4767]: I0127 16:45:02.932661 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl4jx\" (UniqueName: \"kubernetes.io/projected/fc85046e-7e1d-41cc-8b24-0bff2e9dd764-kube-api-access-pl4jx\") pod \"fc85046e-7e1d-41cc-8b24-0bff2e9dd764\" (UID: \"fc85046e-7e1d-41cc-8b24-0bff2e9dd764\") "
Jan 27 16:45:02 crc kubenswrapper[4767]: I0127 16:45:02.932736 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc85046e-7e1d-41cc-8b24-0bff2e9dd764-secret-volume\") pod \"fc85046e-7e1d-41cc-8b24-0bff2e9dd764\" (UID: \"fc85046e-7e1d-41cc-8b24-0bff2e9dd764\") "
Jan 27 16:45:02 crc kubenswrapper[4767]: I0127 16:45:02.932831 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc85046e-7e1d-41cc-8b24-0bff2e9dd764-config-volume\") pod \"fc85046e-7e1d-41cc-8b24-0bff2e9dd764\" (UID: \"fc85046e-7e1d-41cc-8b24-0bff2e9dd764\") "
Jan 27 16:45:02 crc kubenswrapper[4767]: I0127 16:45:02.934543 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc85046e-7e1d-41cc-8b24-0bff2e9dd764-config-volume" (OuterVolumeSpecName: "config-volume") pod "fc85046e-7e1d-41cc-8b24-0bff2e9dd764" (UID: "fc85046e-7e1d-41cc-8b24-0bff2e9dd764"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 16:45:02 crc kubenswrapper[4767]: I0127 16:45:02.938030 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc85046e-7e1d-41cc-8b24-0bff2e9dd764-kube-api-access-pl4jx" (OuterVolumeSpecName: "kube-api-access-pl4jx") pod "fc85046e-7e1d-41cc-8b24-0bff2e9dd764" (UID: "fc85046e-7e1d-41cc-8b24-0bff2e9dd764"). InnerVolumeSpecName "kube-api-access-pl4jx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:45:02 crc kubenswrapper[4767]: I0127 16:45:02.943450 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc85046e-7e1d-41cc-8b24-0bff2e9dd764-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fc85046e-7e1d-41cc-8b24-0bff2e9dd764" (UID: "fc85046e-7e1d-41cc-8b24-0bff2e9dd764"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 16:45:03 crc kubenswrapper[4767]: I0127 16:45:03.034599 4767 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc85046e-7e1d-41cc-8b24-0bff2e9dd764-config-volume\") on node \"crc\" DevicePath \"\""
Jan 27 16:45:03 crc kubenswrapper[4767]: I0127 16:45:03.035019 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pl4jx\" (UniqueName: \"kubernetes.io/projected/fc85046e-7e1d-41cc-8b24-0bff2e9dd764-kube-api-access-pl4jx\") on node \"crc\" DevicePath \"\""
Jan 27 16:45:03 crc kubenswrapper[4767]: I0127 16:45:03.035077 4767 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc85046e-7e1d-41cc-8b24-0bff2e9dd764-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 27 16:45:03 crc kubenswrapper[4767]: I0127 16:45:03.461001 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-xxdkk" event={"ID":"fc85046e-7e1d-41cc-8b24-0bff2e9dd764","Type":"ContainerDied","Data":"238d9d42ac1963f11eeccc85f883b37139f01c02688b1eefbd1c2b46bddbfb89"}
Jan 27 16:45:03 crc kubenswrapper[4767]: I0127 16:45:03.461383 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="238d9d42ac1963f11eeccc85f883b37139f01c02688b1eefbd1c2b46bddbfb89"
Jan 27 16:45:03 crc kubenswrapper[4767]: I0127 16:45:03.461246 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-xxdkk"
Jan 27 16:45:03 crc kubenswrapper[4767]: I0127 16:45:03.822104 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492160-tkq4l"]
Jan 27 16:45:03 crc kubenswrapper[4767]: I0127 16:45:03.829163 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492160-tkq4l"]
Jan 27 16:45:04 crc kubenswrapper[4767]: I0127 16:45:04.325305 4767 scope.go:117] "RemoveContainer" containerID="ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1"
Jan 27 16:45:04 crc kubenswrapper[4767]: E0127 16:45:04.325698 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:45:04 crc kubenswrapper[4767]: I0127 16:45:04.337386 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef2e87b5-39f5-453d-b824-925c37604298" path="/var/lib/kubelet/pods/ef2e87b5-39f5-453d-b824-925c37604298/volumes"
Jan 27 16:45:19 crc kubenswrapper[4767]: I0127 16:45:19.325972 4767 scope.go:117] "RemoveContainer" containerID="ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1"
Jan 27 16:45:19 crc kubenswrapper[4767]: E0127 16:45:19.327237 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:45:31 crc kubenswrapper[4767]: I0127 16:45:31.325617 4767 scope.go:117] "RemoveContainer" containerID="ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1"
Jan 27 16:45:31 crc kubenswrapper[4767]: E0127 16:45:31.326419 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:45:42 crc kubenswrapper[4767]: I0127 16:45:42.777865 4767 scope.go:117] "RemoveContainer" containerID="9b453046a7cc8d0af396d649bbc97084f9b2437b18814a0626147bd3d596bbce"
Jan 27 16:45:44 crc kubenswrapper[4767]: I0127 16:45:44.326428 4767 scope.go:117] "RemoveContainer" containerID="ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1"
Jan 27 16:45:44 crc kubenswrapper[4767]: E0127 16:45:44.327299 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:45:56 crc kubenswrapper[4767]: I0127 16:45:56.325488 4767 scope.go:117] "RemoveContainer" containerID="ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1"
Jan 27 16:45:56 crc kubenswrapper[4767]: E0127 16:45:56.327622 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:46:08 crc kubenswrapper[4767]: I0127 16:46:08.346725 4767 scope.go:117] "RemoveContainer" containerID="ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1"
Jan 27 16:46:08 crc kubenswrapper[4767]: E0127 16:46:08.348449 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:46:22 crc kubenswrapper[4767]: I0127 16:46:22.325632 4767 scope.go:117] "RemoveContainer" containerID="ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1"
Jan 27 16:46:22 crc kubenswrapper[4767]: E0127 16:46:22.326337 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:46:34 crc kubenswrapper[4767]: I0127 16:46:34.325313 4767 scope.go:117] "RemoveContainer" containerID="ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1"
Jan 27 16:46:34 crc kubenswrapper[4767]: E0127 16:46:34.325941 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:46:47 crc kubenswrapper[4767]: I0127 16:46:47.325602 4767 scope.go:117] "RemoveContainer" containerID="ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1"
Jan 27 16:46:47 crc kubenswrapper[4767]: E0127 16:46:47.326224 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:46:58 crc kubenswrapper[4767]: I0127 16:46:58.333314 4767
scope.go:117] "RemoveContainer" containerID="ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1" Jan 27 16:46:59 crc kubenswrapper[4767]: I0127 16:46:59.353899 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerStarted","Data":"6a3ca7cc7b0750e94e6856f9899c8abe2a5d569a35236e157398abf3b01c2757"} Jan 27 16:49:24 crc kubenswrapper[4767]: I0127 16:49:24.857409 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:49:24 crc kubenswrapper[4767]: I0127 16:49:24.857824 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:49:28 crc kubenswrapper[4767]: I0127 16:49:28.941116 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qvs6k"] Jan 27 16:49:28 crc kubenswrapper[4767]: E0127 16:49:28.941755 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc85046e-7e1d-41cc-8b24-0bff2e9dd764" containerName="collect-profiles" Jan 27 16:49:28 crc kubenswrapper[4767]: I0127 16:49:28.941769 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc85046e-7e1d-41cc-8b24-0bff2e9dd764" containerName="collect-profiles" Jan 27 16:49:28 crc kubenswrapper[4767]: I0127 16:49:28.941912 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc85046e-7e1d-41cc-8b24-0bff2e9dd764" containerName="collect-profiles" Jan 27 16:49:28 crc kubenswrapper[4767]: I0127 16:49:28.942964 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qvs6k" Jan 27 16:49:28 crc kubenswrapper[4767]: I0127 16:49:28.990585 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qvs6k"] Jan 27 16:49:29 crc kubenswrapper[4767]: I0127 16:49:29.060904 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5a3b51-e2ec-45af-b92a-dbb76998199b-utilities\") pod \"community-operators-qvs6k\" (UID: \"dc5a3b51-e2ec-45af-b92a-dbb76998199b\") " pod="openshift-marketplace/community-operators-qvs6k" Jan 27 16:49:29 crc kubenswrapper[4767]: I0127 16:49:29.061005 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cggww\" (UniqueName: \"kubernetes.io/projected/dc5a3b51-e2ec-45af-b92a-dbb76998199b-kube-api-access-cggww\") pod \"community-operators-qvs6k\" (UID: \"dc5a3b51-e2ec-45af-b92a-dbb76998199b\") " pod="openshift-marketplace/community-operators-qvs6k" Jan 27 16:49:29 crc kubenswrapper[4767]: I0127 16:49:29.061027 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5a3b51-e2ec-45af-b92a-dbb76998199b-catalog-content\") pod \"community-operators-qvs6k\" (UID: \"dc5a3b51-e2ec-45af-b92a-dbb76998199b\") " pod="openshift-marketplace/community-operators-qvs6k" Jan 27 16:49:29 crc kubenswrapper[4767]: I0127 16:49:29.162093 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5a3b51-e2ec-45af-b92a-dbb76998199b-utilities\") pod \"community-operators-qvs6k\" (UID: \"dc5a3b51-e2ec-45af-b92a-dbb76998199b\") " pod="openshift-marketplace/community-operators-qvs6k" Jan 27 16:49:29 crc kubenswrapper[4767]: I0127 16:49:29.162246 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cggww\" (UniqueName: \"kubernetes.io/projected/dc5a3b51-e2ec-45af-b92a-dbb76998199b-kube-api-access-cggww\") pod \"community-operators-qvs6k\" (UID: \"dc5a3b51-e2ec-45af-b92a-dbb76998199b\") " pod="openshift-marketplace/community-operators-qvs6k" Jan 27 16:49:29 crc kubenswrapper[4767]: I0127 16:49:29.162276 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5a3b51-e2ec-45af-b92a-dbb76998199b-catalog-content\") pod \"community-operators-qvs6k\" (UID: \"dc5a3b51-e2ec-45af-b92a-dbb76998199b\") " pod="openshift-marketplace/community-operators-qvs6k" Jan 27 16:49:29 crc kubenswrapper[4767]: I0127 16:49:29.162696 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5a3b51-e2ec-45af-b92a-dbb76998199b-utilities\") pod \"community-operators-qvs6k\" (UID: \"dc5a3b51-e2ec-45af-b92a-dbb76998199b\") " pod="openshift-marketplace/community-operators-qvs6k" Jan 27 16:49:29 crc kubenswrapper[4767]: I0127 16:49:29.162736 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5a3b51-e2ec-45af-b92a-dbb76998199b-catalog-content\") pod \"community-operators-qvs6k\" (UID: \"dc5a3b51-e2ec-45af-b92a-dbb76998199b\") " pod="openshift-marketplace/community-operators-qvs6k" Jan 27 16:49:29 crc kubenswrapper[4767]: I0127 16:49:29.182657 4767 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cggww\" (UniqueName: \"kubernetes.io/projected/dc5a3b51-e2ec-45af-b92a-dbb76998199b-kube-api-access-cggww\") pod \"community-operators-qvs6k\" (UID: \"dc5a3b51-e2ec-45af-b92a-dbb76998199b\") " pod="openshift-marketplace/community-operators-qvs6k" Jan 27 16:49:29 crc kubenswrapper[4767]: I0127 16:49:29.291440 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qvs6k" Jan 27 16:49:29 crc kubenswrapper[4767]: I0127 16:49:29.792427 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qvs6k"] Jan 27 16:49:29 crc kubenswrapper[4767]: I0127 16:49:29.951362 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvs6k" event={"ID":"dc5a3b51-e2ec-45af-b92a-dbb76998199b","Type":"ContainerStarted","Data":"adbaa84183d504d74aaa98e70bc781fc79ccb2a276ef1b0ade30c9c5dc9fc8a5"} Jan 27 16:49:30 crc kubenswrapper[4767]: I0127 16:49:30.961803 4767 generic.go:334] "Generic (PLEG): container finished" podID="dc5a3b51-e2ec-45af-b92a-dbb76998199b" containerID="43b67b915d095fb1c540dbb4ae6eb6b0370cb45e02aa7f44e7e7f1d4748e3c41" exitCode=0 Jan 27 16:49:30 crc kubenswrapper[4767]: I0127 16:49:30.961860 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvs6k" event={"ID":"dc5a3b51-e2ec-45af-b92a-dbb76998199b","Type":"ContainerDied","Data":"43b67b915d095fb1c540dbb4ae6eb6b0370cb45e02aa7f44e7e7f1d4748e3c41"} Jan 27 16:49:30 crc kubenswrapper[4767]: I0127 16:49:30.964495 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 16:49:31 crc kubenswrapper[4767]: I0127 16:49:31.969375 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvs6k" event={"ID":"dc5a3b51-e2ec-45af-b92a-dbb76998199b","Type":"ContainerStarted","Data":"9c2c50aa459597d366a1a4a6563e506cb6a06122d9f55944ea643ac7e8d4de4a"} Jan 27 16:49:32 crc kubenswrapper[4767]: I0127 16:49:32.980848 4767 generic.go:334] "Generic (PLEG): container finished" podID="dc5a3b51-e2ec-45af-b92a-dbb76998199b" containerID="9c2c50aa459597d366a1a4a6563e506cb6a06122d9f55944ea643ac7e8d4de4a" exitCode=0 Jan 27 16:49:32 crc kubenswrapper[4767]: I0127 16:49:32.980901 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvs6k" event={"ID":"dc5a3b51-e2ec-45af-b92a-dbb76998199b","Type":"ContainerDied","Data":"9c2c50aa459597d366a1a4a6563e506cb6a06122d9f55944ea643ac7e8d4de4a"} Jan 27 16:49:33 crc kubenswrapper[4767]: I0127 16:49:33.991807 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvs6k" event={"ID":"dc5a3b51-e2ec-45af-b92a-dbb76998199b","Type":"ContainerStarted","Data":"ae9d8217e8b33a685c8c9d19934c1f87c8b140ba87918f0b74c7c1d6f7257a96"} Jan 27 16:49:34 crc kubenswrapper[4767]: I0127 16:49:34.024108 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qvs6k" podStartSLOduration=3.5840548500000002 podStartE2EDuration="6.024073941s" podCreationTimestamp="2026-01-27 16:49:28 +0000 UTC" firstStartedPulling="2026-01-27 16:49:30.963891476 +0000 UTC m=+3593.352909039" lastFinishedPulling="2026-01-27 16:49:33.403910567 +0000 UTC m=+3595.792928130" observedRunningTime="2026-01-27 16:49:34.0166408 +0000 UTC m=+3596.405658353" watchObservedRunningTime="2026-01-27 
16:49:34.024073941 +0000 UTC m=+3596.413091514" Jan 27 16:49:39 crc kubenswrapper[4767]: I0127 16:49:39.292257 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qvs6k" Jan 27 16:49:39 crc kubenswrapper[4767]: I0127 16:49:39.292859 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qvs6k" Jan 27 16:49:39 crc kubenswrapper[4767]: I0127 16:49:39.374664 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qvs6k" Jan 27 16:49:40 crc kubenswrapper[4767]: I0127 16:49:40.131967 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qvs6k" Jan 27 16:49:40 crc kubenswrapper[4767]: I0127 16:49:40.204256 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qvs6k"] Jan 27 16:49:42 crc kubenswrapper[4767]: I0127 16:49:42.073255 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qvs6k" podUID="dc5a3b51-e2ec-45af-b92a-dbb76998199b" containerName="registry-server" containerID="cri-o://ae9d8217e8b33a685c8c9d19934c1f87c8b140ba87918f0b74c7c1d6f7257a96" gracePeriod=2 Jan 27 16:49:42 crc kubenswrapper[4767]: I0127 16:49:42.597742 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qvs6k" Jan 27 16:49:42 crc kubenswrapper[4767]: I0127 16:49:42.774915 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5a3b51-e2ec-45af-b92a-dbb76998199b-catalog-content\") pod \"dc5a3b51-e2ec-45af-b92a-dbb76998199b\" (UID: \"dc5a3b51-e2ec-45af-b92a-dbb76998199b\") " Jan 27 16:49:42 crc kubenswrapper[4767]: I0127 16:49:42.775499 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5a3b51-e2ec-45af-b92a-dbb76998199b-utilities\") pod \"dc5a3b51-e2ec-45af-b92a-dbb76998199b\" (UID: \"dc5a3b51-e2ec-45af-b92a-dbb76998199b\") " Jan 27 16:49:42 crc kubenswrapper[4767]: I0127 16:49:42.775555 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cggww\" (UniqueName: \"kubernetes.io/projected/dc5a3b51-e2ec-45af-b92a-dbb76998199b-kube-api-access-cggww\") pod \"dc5a3b51-e2ec-45af-b92a-dbb76998199b\" (UID: \"dc5a3b51-e2ec-45af-b92a-dbb76998199b\") " Jan 27 16:49:42 crc kubenswrapper[4767]: I0127 16:49:42.777476 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc5a3b51-e2ec-45af-b92a-dbb76998199b-utilities" (OuterVolumeSpecName: "utilities") pod "dc5a3b51-e2ec-45af-b92a-dbb76998199b" (UID: "dc5a3b51-e2ec-45af-b92a-dbb76998199b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:49:42 crc kubenswrapper[4767]: I0127 16:49:42.783610 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc5a3b51-e2ec-45af-b92a-dbb76998199b-kube-api-access-cggww" (OuterVolumeSpecName: "kube-api-access-cggww") pod "dc5a3b51-e2ec-45af-b92a-dbb76998199b" (UID: "dc5a3b51-e2ec-45af-b92a-dbb76998199b"). InnerVolumeSpecName "kube-api-access-cggww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:49:42 crc kubenswrapper[4767]: I0127 16:49:42.868319 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc5a3b51-e2ec-45af-b92a-dbb76998199b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc5a3b51-e2ec-45af-b92a-dbb76998199b" (UID: "dc5a3b51-e2ec-45af-b92a-dbb76998199b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:49:42 crc kubenswrapper[4767]: I0127 16:49:42.878337 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5a3b51-e2ec-45af-b92a-dbb76998199b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:49:42 crc kubenswrapper[4767]: I0127 16:49:42.878410 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cggww\" (UniqueName: \"kubernetes.io/projected/dc5a3b51-e2ec-45af-b92a-dbb76998199b-kube-api-access-cggww\") on node \"crc\" DevicePath \"\"" Jan 27 16:49:42 crc kubenswrapper[4767]: I0127 16:49:42.878444 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5a3b51-e2ec-45af-b92a-dbb76998199b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:49:43 crc kubenswrapper[4767]: I0127 16:49:43.085180 4767 generic.go:334] "Generic (PLEG): container finished" podID="dc5a3b51-e2ec-45af-b92a-dbb76998199b" containerID="ae9d8217e8b33a685c8c9d19934c1f87c8b140ba87918f0b74c7c1d6f7257a96" exitCode=0 Jan 27 16:49:43 crc kubenswrapper[4767]: I0127 16:49:43.085285 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvs6k" event={"ID":"dc5a3b51-e2ec-45af-b92a-dbb76998199b","Type":"ContainerDied","Data":"ae9d8217e8b33a685c8c9d19934c1f87c8b140ba87918f0b74c7c1d6f7257a96"} Jan 27 16:49:43 crc kubenswrapper[4767]: I0127 16:49:43.085328 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvs6k" event={"ID":"dc5a3b51-e2ec-45af-b92a-dbb76998199b","Type":"ContainerDied","Data":"adbaa84183d504d74aaa98e70bc781fc79ccb2a276ef1b0ade30c9c5dc9fc8a5"} Jan 27 16:49:43 crc kubenswrapper[4767]: I0127 16:49:43.085321 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qvs6k" Jan 27 16:49:43 crc kubenswrapper[4767]: I0127 16:49:43.085355 4767 scope.go:117] "RemoveContainer" containerID="ae9d8217e8b33a685c8c9d19934c1f87c8b140ba87918f0b74c7c1d6f7257a96" Jan 27 16:49:43 crc kubenswrapper[4767]: I0127 16:49:43.112032 4767 scope.go:117] "RemoveContainer" containerID="9c2c50aa459597d366a1a4a6563e506cb6a06122d9f55944ea643ac7e8d4de4a" Jan 27 16:49:43 crc kubenswrapper[4767]: I0127 16:49:43.150614 4767 scope.go:117] "RemoveContainer" containerID="43b67b915d095fb1c540dbb4ae6eb6b0370cb45e02aa7f44e7e7f1d4748e3c41" Jan 27 16:49:43 crc kubenswrapper[4767]: I0127 16:49:43.151852 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qvs6k"] Jan 27 16:49:43 crc kubenswrapper[4767]: I0127 16:49:43.160090 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qvs6k"] Jan 27 16:49:43 crc kubenswrapper[4767]: I0127 16:49:43.191767 4767 scope.go:117] "RemoveContainer" containerID="ae9d8217e8b33a685c8c9d19934c1f87c8b140ba87918f0b74c7c1d6f7257a96" Jan 27 16:49:43 crc kubenswrapper[4767]: E0127 16:49:43.192294 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae9d8217e8b33a685c8c9d19934c1f87c8b140ba87918f0b74c7c1d6f7257a96\": container with ID starting with ae9d8217e8b33a685c8c9d19934c1f87c8b140ba87918f0b74c7c1d6f7257a96 not found: ID does not exist" containerID="ae9d8217e8b33a685c8c9d19934c1f87c8b140ba87918f0b74c7c1d6f7257a96" Jan 27 16:49:43 crc kubenswrapper[4767]: I0127 16:49:43.192327 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae9d8217e8b33a685c8c9d19934c1f87c8b140ba87918f0b74c7c1d6f7257a96"} err="failed to get container status \"ae9d8217e8b33a685c8c9d19934c1f87c8b140ba87918f0b74c7c1d6f7257a96\": rpc error: code = NotFound desc = could not find container \"ae9d8217e8b33a685c8c9d19934c1f87c8b140ba87918f0b74c7c1d6f7257a96\": container with ID starting with ae9d8217e8b33a685c8c9d19934c1f87c8b140ba87918f0b74c7c1d6f7257a96 not found: ID does not exist" Jan 27 16:49:43 crc kubenswrapper[4767]: I0127 16:49:43.192347 4767 scope.go:117] "RemoveContainer" containerID="9c2c50aa459597d366a1a4a6563e506cb6a06122d9f55944ea643ac7e8d4de4a" Jan 27 16:49:43 crc kubenswrapper[4767]: E0127 16:49:43.192795 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c2c50aa459597d366a1a4a6563e506cb6a06122d9f55944ea643ac7e8d4de4a\": container with ID starting with 9c2c50aa459597d366a1a4a6563e506cb6a06122d9f55944ea643ac7e8d4de4a not found: ID does not exist" containerID="9c2c50aa459597d366a1a4a6563e506cb6a06122d9f55944ea643ac7e8d4de4a" Jan 27 16:49:43 crc kubenswrapper[4767]: I0127 16:49:43.192817 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c2c50aa459597d366a1a4a6563e506cb6a06122d9f55944ea643ac7e8d4de4a"} err="failed to get container status \"9c2c50aa459597d366a1a4a6563e506cb6a06122d9f55944ea643ac7e8d4de4a\": rpc error: code = NotFound desc = could not find container \"9c2c50aa459597d366a1a4a6563e506cb6a06122d9f55944ea643ac7e8d4de4a\": container with ID starting with 9c2c50aa459597d366a1a4a6563e506cb6a06122d9f55944ea643ac7e8d4de4a not found: ID does not exist" Jan 27 16:49:43 crc kubenswrapper[4767]: I0127 16:49:43.192831 4767 scope.go:117] "RemoveContainer" 
containerID="43b67b915d095fb1c540dbb4ae6eb6b0370cb45e02aa7f44e7e7f1d4748e3c41" Jan 27 16:49:43 crc kubenswrapper[4767]: E0127 16:49:43.193306 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43b67b915d095fb1c540dbb4ae6eb6b0370cb45e02aa7f44e7e7f1d4748e3c41\": container with ID starting with 43b67b915d095fb1c540dbb4ae6eb6b0370cb45e02aa7f44e7e7f1d4748e3c41 not found: ID does not exist" containerID="43b67b915d095fb1c540dbb4ae6eb6b0370cb45e02aa7f44e7e7f1d4748e3c41" Jan 27 16:49:43 crc kubenswrapper[4767]: I0127 16:49:43.193325 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43b67b915d095fb1c540dbb4ae6eb6b0370cb45e02aa7f44e7e7f1d4748e3c41"} err="failed to get container status \"43b67b915d095fb1c540dbb4ae6eb6b0370cb45e02aa7f44e7e7f1d4748e3c41\": rpc error: code = NotFound desc = could not find container \"43b67b915d095fb1c540dbb4ae6eb6b0370cb45e02aa7f44e7e7f1d4748e3c41\": container with ID starting with 43b67b915d095fb1c540dbb4ae6eb6b0370cb45e02aa7f44e7e7f1d4748e3c41 not found: ID does not exist" Jan 27 16:49:44 crc kubenswrapper[4767]: I0127 16:49:44.339778 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc5a3b51-e2ec-45af-b92a-dbb76998199b" path="/var/lib/kubelet/pods/dc5a3b51-e2ec-45af-b92a-dbb76998199b/volumes" Jan 27 16:49:54 crc kubenswrapper[4767]: I0127 16:49:54.858279 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:49:54 crc kubenswrapper[4767]: I0127 16:49:54.859076 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:50:00 crc kubenswrapper[4767]: I0127 16:50:00.315300 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v7b84"] Jan 27 16:50:00 crc kubenswrapper[4767]: E0127 16:50:00.315890 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5a3b51-e2ec-45af-b92a-dbb76998199b" containerName="registry-server" Jan 27 16:50:00 crc kubenswrapper[4767]: I0127 16:50:00.315901 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5a3b51-e2ec-45af-b92a-dbb76998199b" containerName="registry-server" Jan 27 16:50:00 crc kubenswrapper[4767]: E0127 16:50:00.315930 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5a3b51-e2ec-45af-b92a-dbb76998199b" containerName="extract-content" Jan 27 16:50:00 crc kubenswrapper[4767]: I0127 16:50:00.315936 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5a3b51-e2ec-45af-b92a-dbb76998199b" containerName="extract-content" Jan 27 16:50:00 crc kubenswrapper[4767]: E0127 16:50:00.315948 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5a3b51-e2ec-45af-b92a-dbb76998199b" containerName="extract-utilities" Jan 27 16:50:00 crc kubenswrapper[4767]: I0127 16:50:00.315955 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5a3b51-e2ec-45af-b92a-dbb76998199b" containerName="extract-utilities" Jan 27 16:50:00 crc kubenswrapper[4767]: I0127 
16:50:00.316081 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc5a3b51-e2ec-45af-b92a-dbb76998199b" containerName="registry-server" Jan 27 16:50:00 crc kubenswrapper[4767]: I0127 16:50:00.316982 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v7b84" Jan 27 16:50:00 crc kubenswrapper[4767]: I0127 16:50:00.338260 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v7b84"] Jan 27 16:50:00 crc kubenswrapper[4767]: I0127 16:50:00.484481 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4rft\" (UniqueName: \"kubernetes.io/projected/201e07ef-517b-4210-b053-de03952a2d98-kube-api-access-k4rft\") pod \"certified-operators-v7b84\" (UID: \"201e07ef-517b-4210-b053-de03952a2d98\") " pod="openshift-marketplace/certified-operators-v7b84" Jan 27 16:50:00 crc kubenswrapper[4767]: I0127 16:50:00.484604 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/201e07ef-517b-4210-b053-de03952a2d98-catalog-content\") pod \"certified-operators-v7b84\" (UID: \"201e07ef-517b-4210-b053-de03952a2d98\") " pod="openshift-marketplace/certified-operators-v7b84" Jan 27 16:50:00 crc kubenswrapper[4767]: I0127 16:50:00.484797 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/201e07ef-517b-4210-b053-de03952a2d98-utilities\") pod \"certified-operators-v7b84\" (UID: \"201e07ef-517b-4210-b053-de03952a2d98\") " pod="openshift-marketplace/certified-operators-v7b84" Jan 27 16:50:00 crc kubenswrapper[4767]: I0127 16:50:00.585715 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/201e07ef-517b-4210-b053-de03952a2d98-utilities\") pod \"certified-operators-v7b84\" (UID: \"201e07ef-517b-4210-b053-de03952a2d98\") " pod="openshift-marketplace/certified-operators-v7b84" Jan 27 16:50:00 crc kubenswrapper[4767]: I0127 16:50:00.585794 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4rft\" (UniqueName: \"kubernetes.io/projected/201e07ef-517b-4210-b053-de03952a2d98-kube-api-access-k4rft\") pod \"certified-operators-v7b84\" (UID: \"201e07ef-517b-4210-b053-de03952a2d98\") " pod="openshift-marketplace/certified-operators-v7b84" Jan 27 16:50:00 crc kubenswrapper[4767]: I0127 16:50:00.585834 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/201e07ef-517b-4210-b053-de03952a2d98-catalog-content\") pod \"certified-operators-v7b84\" (UID: \"201e07ef-517b-4210-b053-de03952a2d98\") " pod="openshift-marketplace/certified-operators-v7b84" Jan 27 16:50:00 crc kubenswrapper[4767]: I0127 16:50:00.586444 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/201e07ef-517b-4210-b053-de03952a2d98-catalog-content\") pod \"certified-operators-v7b84\" (UID: \"201e07ef-517b-4210-b053-de03952a2d98\") " pod="openshift-marketplace/certified-operators-v7b84" Jan 27 16:50:00 crc kubenswrapper[4767]: I0127 16:50:00.586824 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/201e07ef-517b-4210-b053-de03952a2d98-utilities\") pod \"certified-operators-v7b84\" (UID: \"201e07ef-517b-4210-b053-de03952a2d98\") " pod="openshift-marketplace/certified-operators-v7b84" Jan 27 16:50:00 crc kubenswrapper[4767]: I0127 16:50:00.604120 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4rft\" (UniqueName: \"kubernetes.io/projected/201e07ef-517b-4210-b053-de03952a2d98-kube-api-access-k4rft\") pod \"certified-operators-v7b84\" (UID: \"201e07ef-517b-4210-b053-de03952a2d98\") " pod="openshift-marketplace/certified-operators-v7b84" Jan 27 16:50:00 crc kubenswrapper[4767]: I0127 16:50:00.705947 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v7b84" Jan 27 16:50:01 crc kubenswrapper[4767]: I0127 16:50:01.147581 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v7b84"] Jan 27 16:50:01 crc kubenswrapper[4767]: I0127 16:50:01.320242 4767 generic.go:334] "Generic (PLEG): container finished" podID="201e07ef-517b-4210-b053-de03952a2d98" containerID="a220838e3587dc124307d71ee38406915a8bf6f7603f9df57ea7a062aa953c98" exitCode=0 Jan 27 16:50:01 crc kubenswrapper[4767]: I0127 16:50:01.320278 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7b84" event={"ID":"201e07ef-517b-4210-b053-de03952a2d98","Type":"ContainerDied","Data":"a220838e3587dc124307d71ee38406915a8bf6f7603f9df57ea7a062aa953c98"} Jan 27 16:50:01 crc kubenswrapper[4767]: I0127 16:50:01.320300 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7b84" event={"ID":"201e07ef-517b-4210-b053-de03952a2d98","Type":"ContainerStarted","Data":"a5ff3d2c4307a8d87983088c4f7eee496a586386b85c99451393810802816ba3"} Jan 27 16:50:03 crc kubenswrapper[4767]: I0127 16:50:03.342744 4767 generic.go:334] "Generic (PLEG): container finished" podID="201e07ef-517b-4210-b053-de03952a2d98" containerID="bcccb88d3f91585d4078eded2f87fcc2c5e110c4648cc934623c6887adf4ae8a" exitCode=0 Jan 27 16:50:03 crc kubenswrapper[4767]: I0127 16:50:03.342803 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7b84" event={"ID":"201e07ef-517b-4210-b053-de03952a2d98","Type":"ContainerDied","Data":"bcccb88d3f91585d4078eded2f87fcc2c5e110c4648cc934623c6887adf4ae8a"} Jan 27 16:50:04 crc kubenswrapper[4767]: I0127 16:50:04.356196 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7b84" event={"ID":"201e07ef-517b-4210-b053-de03952a2d98","Type":"ContainerStarted","Data":"fec74f13081204b4411c8e4ac713050a68615f839d6ef57be46cb0093caae647"} Jan 27 16:50:04 crc kubenswrapper[4767]: I0127 16:50:04.385769 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v7b84" podStartSLOduration=1.943447773 podStartE2EDuration="4.385749308s" podCreationTimestamp="2026-01-27 16:50:00 +0000 UTC" firstStartedPulling="2026-01-27 16:50:01.32156171 +0000 UTC m=+3623.710579253" lastFinishedPulling="2026-01-27 16:50:03.763863235 +0000 UTC m=+3626.152880788" observedRunningTime="2026-01-27 16:50:04.383122434 +0000 UTC m=+3626.772140017" watchObservedRunningTime="2026-01-27 16:50:04.385749308 +0000 UTC m=+3626.774766841" Jan 27 16:50:10 crc kubenswrapper[4767]: I0127 16:50:10.706862 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-v7b84" Jan 27 16:50:10 crc kubenswrapper[4767]: I0127 16:50:10.708500 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v7b84" Jan 27 16:50:10 crc kubenswrapper[4767]: I0127 16:50:10.758868 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v7b84" Jan 27 16:50:11 crc kubenswrapper[4767]: I0127 16:50:11.489145 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v7b84" Jan 27 16:50:11 crc kubenswrapper[4767]: I0127 16:50:11.578376 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v7b84"] Jan 27 16:50:13 crc kubenswrapper[4767]: I0127 16:50:13.438035 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v7b84" podUID="201e07ef-517b-4210-b053-de03952a2d98" containerName="registry-server" containerID="cri-o://fec74f13081204b4411c8e4ac713050a68615f839d6ef57be46cb0093caae647" gracePeriod=2 Jan 27 16:50:14 crc kubenswrapper[4767]: I0127 16:50:14.448198 4767 generic.go:334] "Generic (PLEG): container finished" podID="201e07ef-517b-4210-b053-de03952a2d98" containerID="fec74f13081204b4411c8e4ac713050a68615f839d6ef57be46cb0093caae647" exitCode=0 Jan 27 16:50:14 crc kubenswrapper[4767]: I0127 16:50:14.448253 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7b84" event={"ID":"201e07ef-517b-4210-b053-de03952a2d98","Type":"ContainerDied","Data":"fec74f13081204b4411c8e4ac713050a68615f839d6ef57be46cb0093caae647"} Jan 27 16:50:14 crc kubenswrapper[4767]: I0127 16:50:14.512696 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v7b84" Jan 27 16:50:14 crc kubenswrapper[4767]: I0127 16:50:14.519792 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/201e07ef-517b-4210-b053-de03952a2d98-catalog-content\") pod \"201e07ef-517b-4210-b053-de03952a2d98\" (UID: \"201e07ef-517b-4210-b053-de03952a2d98\") " Jan 27 16:50:14 crc kubenswrapper[4767]: I0127 16:50:14.519839 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4rft\" (UniqueName: \"kubernetes.io/projected/201e07ef-517b-4210-b053-de03952a2d98-kube-api-access-k4rft\") pod \"201e07ef-517b-4210-b053-de03952a2d98\" (UID: \"201e07ef-517b-4210-b053-de03952a2d98\") " Jan 27 16:50:14 crc kubenswrapper[4767]: I0127 16:50:14.519896 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/201e07ef-517b-4210-b053-de03952a2d98-utilities\") pod \"201e07ef-517b-4210-b053-de03952a2d98\" (UID: \"201e07ef-517b-4210-b053-de03952a2d98\") " Jan 27 16:50:14 crc kubenswrapper[4767]: I0127 16:50:14.520872 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/201e07ef-517b-4210-b053-de03952a2d98-utilities" (OuterVolumeSpecName: "utilities") pod "201e07ef-517b-4210-b053-de03952a2d98" (UID: "201e07ef-517b-4210-b053-de03952a2d98"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:50:14 crc kubenswrapper[4767]: I0127 16:50:14.525233 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/201e07ef-517b-4210-b053-de03952a2d98-kube-api-access-k4rft" (OuterVolumeSpecName: "kube-api-access-k4rft") pod "201e07ef-517b-4210-b053-de03952a2d98" (UID: "201e07ef-517b-4210-b053-de03952a2d98"). InnerVolumeSpecName "kube-api-access-k4rft". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:50:14 crc kubenswrapper[4767]: I0127 16:50:14.565678 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/201e07ef-517b-4210-b053-de03952a2d98-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "201e07ef-517b-4210-b053-de03952a2d98" (UID: "201e07ef-517b-4210-b053-de03952a2d98"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:50:14 crc kubenswrapper[4767]: I0127 16:50:14.620878 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/201e07ef-517b-4210-b053-de03952a2d98-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:50:14 crc kubenswrapper[4767]: I0127 16:50:14.620907 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4rft\" (UniqueName: \"kubernetes.io/projected/201e07ef-517b-4210-b053-de03952a2d98-kube-api-access-k4rft\") on node \"crc\" DevicePath \"\"" Jan 27 16:50:14 crc kubenswrapper[4767]: I0127 16:50:14.620918 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/201e07ef-517b-4210-b053-de03952a2d98-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:50:15 crc kubenswrapper[4767]: I0127 16:50:15.465568 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7b84" event={"ID":"201e07ef-517b-4210-b053-de03952a2d98","Type":"ContainerDied","Data":"a5ff3d2c4307a8d87983088c4f7eee496a586386b85c99451393810802816ba3"} Jan 27 16:50:15 crc kubenswrapper[4767]: I0127 16:50:15.465651 4767 scope.go:117] "RemoveContainer" containerID="fec74f13081204b4411c8e4ac713050a68615f839d6ef57be46cb0093caae647" Jan 27 16:50:15 crc kubenswrapper[4767]: I0127 16:50:15.465662 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v7b84" Jan 27 16:50:15 crc kubenswrapper[4767]: I0127 16:50:15.535380 4767 scope.go:117] "RemoveContainer" containerID="bcccb88d3f91585d4078eded2f87fcc2c5e110c4648cc934623c6887adf4ae8a" Jan 27 16:50:15 crc kubenswrapper[4767]: I0127 16:50:15.538579 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v7b84"] Jan 27 16:50:15 crc kubenswrapper[4767]: I0127 16:50:15.558234 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v7b84"] Jan 27 16:50:15 crc kubenswrapper[4767]: I0127 16:50:15.561520 4767 scope.go:117] "RemoveContainer" containerID="a220838e3587dc124307d71ee38406915a8bf6f7603f9df57ea7a062aa953c98" Jan 27 16:50:16 crc kubenswrapper[4767]: I0127 16:50:16.341440 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="201e07ef-517b-4210-b053-de03952a2d98" path="/var/lib/kubelet/pods/201e07ef-517b-4210-b053-de03952a2d98/volumes" Jan 27 16:50:24 crc kubenswrapper[4767]: I0127 16:50:24.858357 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:50:24 crc kubenswrapper[4767]: I0127 16:50:24.859192 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:50:24 crc kubenswrapper[4767]: I0127 16:50:24.859320 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" Jan 27 16:50:24 crc kubenswrapper[4767]: I0127 16:50:24.860367 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a3ca7cc7b0750e94e6856f9899c8abe2a5d569a35236e157398abf3b01c2757"} pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 16:50:24 crc kubenswrapper[4767]: I0127 16:50:24.860479 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" containerID="cri-o://6a3ca7cc7b0750e94e6856f9899c8abe2a5d569a35236e157398abf3b01c2757" gracePeriod=600 Jan 27 16:50:25 crc kubenswrapper[4767]: I0127 16:50:25.558840 4767 generic.go:334] "Generic (PLEG): container finished" podID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerID="6a3ca7cc7b0750e94e6856f9899c8abe2a5d569a35236e157398abf3b01c2757" exitCode=0 Jan 27 16:50:25 crc kubenswrapper[4767]: I0127 16:50:25.558931 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerDied","Data":"6a3ca7cc7b0750e94e6856f9899c8abe2a5d569a35236e157398abf3b01c2757"} Jan 27 16:50:25 crc kubenswrapper[4767]: I0127 16:50:25.559116 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerStarted","Data":"901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea"} Jan 27 16:50:25 crc kubenswrapper[4767]: I0127 16:50:25.559149 4767 scope.go:117] "RemoveContainer" containerID="ceec722135f8e1ad8e1f5e6e3b15d07433ffa0f060ae16fa1dc5e958b2ff65f1" Jan 27 16:52:29 crc kubenswrapper[4767]: I0127 16:52:29.055866 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k6sfd"] Jan 27 16:52:29 crc kubenswrapper[4767]: E0127 16:52:29.056697 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="201e07ef-517b-4210-b053-de03952a2d98" containerName="extract-content" Jan 27 16:52:29 crc kubenswrapper[4767]: I0127 16:52:29.056710 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="201e07ef-517b-4210-b053-de03952a2d98" containerName="extract-content" Jan 27 16:52:29 crc kubenswrapper[4767]: E0127 16:52:29.056728 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="201e07ef-517b-4210-b053-de03952a2d98" containerName="registry-server" Jan 27 16:52:29 crc kubenswrapper[4767]: I0127 16:52:29.056734 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="201e07ef-517b-4210-b053-de03952a2d98" containerName="registry-server" Jan 27 16:52:29 crc kubenswrapper[4767]: E0127 16:52:29.056748 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="201e07ef-517b-4210-b053-de03952a2d98" containerName="extract-utilities" Jan 27 16:52:29 crc kubenswrapper[4767]: I0127 16:52:29.056754 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="201e07ef-517b-4210-b053-de03952a2d98" containerName="extract-utilities" Jan 27 16:52:29 crc kubenswrapper[4767]: I0127 16:52:29.056893 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="201e07ef-517b-4210-b053-de03952a2d98" containerName="registry-server" Jan 27 16:52:29 crc kubenswrapper[4767]: I0127 16:52:29.057869 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k6sfd" Jan 27 16:52:29 crc kubenswrapper[4767]: I0127 16:52:29.076390 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6sfd"] Jan 27 16:52:29 crc kubenswrapper[4767]: I0127 16:52:29.232239 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b25b7b1-3b08-4074-957a-1fde73e047d9-catalog-content\") pod \"redhat-marketplace-k6sfd\" (UID: \"1b25b7b1-3b08-4074-957a-1fde73e047d9\") " pod="openshift-marketplace/redhat-marketplace-k6sfd" Jan 27 16:52:29 crc kubenswrapper[4767]: I0127 16:52:29.232281 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gmpv\" (UniqueName: \"kubernetes.io/projected/1b25b7b1-3b08-4074-957a-1fde73e047d9-kube-api-access-6gmpv\") pod \"redhat-marketplace-k6sfd\" (UID: \"1b25b7b1-3b08-4074-957a-1fde73e047d9\") " pod="openshift-marketplace/redhat-marketplace-k6sfd" Jan 27 16:52:29 crc kubenswrapper[4767]: I0127 16:52:29.232570 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b25b7b1-3b08-4074-957a-1fde73e047d9-utilities\") pod \"redhat-marketplace-k6sfd\" (UID: \"1b25b7b1-3b08-4074-957a-1fde73e047d9\") " pod="openshift-marketplace/redhat-marketplace-k6sfd" Jan 27 16:52:29 crc kubenswrapper[4767]: I0127 16:52:29.334551 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b25b7b1-3b08-4074-957a-1fde73e047d9-utilities\") pod \"redhat-marketplace-k6sfd\" (UID: \"1b25b7b1-3b08-4074-957a-1fde73e047d9\") " pod="openshift-marketplace/redhat-marketplace-k6sfd" Jan 27 16:52:29 crc kubenswrapper[4767]: I0127 16:52:29.334618 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b25b7b1-3b08-4074-957a-1fde73e047d9-catalog-content\") pod \"redhat-marketplace-k6sfd\" (UID: \"1b25b7b1-3b08-4074-957a-1fde73e047d9\") " pod="openshift-marketplace/redhat-marketplace-k6sfd" Jan 27 16:52:29 crc kubenswrapper[4767]: I0127 16:52:29.334638 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gmpv\" (UniqueName: \"kubernetes.io/projected/1b25b7b1-3b08-4074-957a-1fde73e047d9-kube-api-access-6gmpv\") pod \"redhat-marketplace-k6sfd\" (UID: \"1b25b7b1-3b08-4074-957a-1fde73e047d9\") " pod="openshift-marketplace/redhat-marketplace-k6sfd" Jan 27 16:52:29 crc kubenswrapper[4767]: I0127 16:52:29.335263 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b25b7b1-3b08-4074-957a-1fde73e047d9-catalog-content\") pod \"redhat-marketplace-k6sfd\" (UID: \"1b25b7b1-3b08-4074-957a-1fde73e047d9\") " pod="openshift-marketplace/redhat-marketplace-k6sfd" Jan 27 16:52:29 crc kubenswrapper[4767]: I0127 16:52:29.335335 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b25b7b1-3b08-4074-957a-1fde73e047d9-utilities\") pod \"redhat-marketplace-k6sfd\" (UID: \"1b25b7b1-3b08-4074-957a-1fde73e047d9\") " pod="openshift-marketplace/redhat-marketplace-k6sfd" Jan 27 16:52:29 crc kubenswrapper[4767]: I0127 16:52:29.354027 4767 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-6gmpv\" (UniqueName: \"kubernetes.io/projected/1b25b7b1-3b08-4074-957a-1fde73e047d9-kube-api-access-6gmpv\") pod \"redhat-marketplace-k6sfd\" (UID: \"1b25b7b1-3b08-4074-957a-1fde73e047d9\") " pod="openshift-marketplace/redhat-marketplace-k6sfd"
Jan 27 16:52:29 crc kubenswrapper[4767]: I0127 16:52:29.381264 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k6sfd"
Jan 27 16:52:29 crc kubenswrapper[4767]: I0127 16:52:29.887328 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6sfd"]
Jan 27 16:52:30 crc kubenswrapper[4767]: I0127 16:52:30.715591 4767 generic.go:334] "Generic (PLEG): container finished" podID="1b25b7b1-3b08-4074-957a-1fde73e047d9" containerID="e16abf88b1aad30399861ea0e09eed042f7a992317ad41b23a1e8f89f311ae3d" exitCode=0
Jan 27 16:52:30 crc kubenswrapper[4767]: I0127 16:52:30.715636 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6sfd" event={"ID":"1b25b7b1-3b08-4074-957a-1fde73e047d9","Type":"ContainerDied","Data":"e16abf88b1aad30399861ea0e09eed042f7a992317ad41b23a1e8f89f311ae3d"}
Jan 27 16:52:30 crc kubenswrapper[4767]: I0127 16:52:30.715884 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6sfd" event={"ID":"1b25b7b1-3b08-4074-957a-1fde73e047d9","Type":"ContainerStarted","Data":"b7f88649895172c196b79fb6e95c707fcfd00fc4f5b96055f51d85533ed3c038"}
Jan 27 16:52:31 crc kubenswrapper[4767]: I0127 16:52:31.727662 4767 generic.go:334] "Generic (PLEG): container finished" podID="1b25b7b1-3b08-4074-957a-1fde73e047d9" containerID="67367381a7e9626b5cb0df9822e5a488f017abef1f07fbd5fa87bdb07d53b24a" exitCode=0
Jan 27 16:52:31 crc kubenswrapper[4767]: I0127 16:52:31.727756 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6sfd" event={"ID":"1b25b7b1-3b08-4074-957a-1fde73e047d9","Type":"ContainerDied","Data":"67367381a7e9626b5cb0df9822e5a488f017abef1f07fbd5fa87bdb07d53b24a"}
Jan 27 16:52:32 crc kubenswrapper[4767]: I0127 16:52:32.739144 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6sfd" event={"ID":"1b25b7b1-3b08-4074-957a-1fde73e047d9","Type":"ContainerStarted","Data":"235ab2b448cfdda082f1392a9c950fcff8ce24504e84909c5c6c25577b195f17"}
Jan 27 16:52:32 crc kubenswrapper[4767]: I0127 16:52:32.759463 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k6sfd" podStartSLOduration=2.357786415 podStartE2EDuration="3.759440363s" podCreationTimestamp="2026-01-27 16:52:29 +0000 UTC" firstStartedPulling="2026-01-27 16:52:30.717258981 +0000 UTC m=+3773.106276504" lastFinishedPulling="2026-01-27 16:52:32.118912919 +0000 UTC m=+3774.507930452" observedRunningTime="2026-01-27 16:52:32.75723524 +0000 UTC m=+3775.146252763" watchObservedRunningTime="2026-01-27 16:52:32.759440363 +0000 UTC m=+3775.148457886"
Jan 27 16:52:39 crc kubenswrapper[4767]: I0127 16:52:39.382489 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k6sfd"
Jan 27 16:52:39 crc kubenswrapper[4767]: I0127 16:52:39.383173 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k6sfd"
Jan 27 16:52:39 crc kubenswrapper[4767]: I0127 16:52:39.451035 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k6sfd"
Jan 27 16:52:39 crc kubenswrapper[4767]: I0127 16:52:39.839704 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k6sfd"
Jan 27 16:52:39 crc kubenswrapper[4767]: I0127 16:52:39.889736 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6sfd"]
Jan 27 16:52:41 crc kubenswrapper[4767]: I0127 16:52:41.819057 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k6sfd" podUID="1b25b7b1-3b08-4074-957a-1fde73e047d9" containerName="registry-server" containerID="cri-o://235ab2b448cfdda082f1392a9c950fcff8ce24504e84909c5c6c25577b195f17" gracePeriod=2
Jan 27 16:52:42 crc kubenswrapper[4767]: I0127 16:52:42.592236 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k6sfd"
Jan 27 16:52:42 crc kubenswrapper[4767]: I0127 16:52:42.743359 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gmpv\" (UniqueName: \"kubernetes.io/projected/1b25b7b1-3b08-4074-957a-1fde73e047d9-kube-api-access-6gmpv\") pod \"1b25b7b1-3b08-4074-957a-1fde73e047d9\" (UID: \"1b25b7b1-3b08-4074-957a-1fde73e047d9\") "
Jan 27 16:52:42 crc kubenswrapper[4767]: I0127 16:52:42.743464 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b25b7b1-3b08-4074-957a-1fde73e047d9-catalog-content\") pod \"1b25b7b1-3b08-4074-957a-1fde73e047d9\" (UID: \"1b25b7b1-3b08-4074-957a-1fde73e047d9\") "
Jan 27 16:52:42 crc kubenswrapper[4767]: I0127 16:52:42.743705 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b25b7b1-3b08-4074-957a-1fde73e047d9-utilities\") pod \"1b25b7b1-3b08-4074-957a-1fde73e047d9\" (UID: \"1b25b7b1-3b08-4074-957a-1fde73e047d9\") "
Jan 27 16:52:42 crc kubenswrapper[4767]: I0127 16:52:42.745113 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b25b7b1-3b08-4074-957a-1fde73e047d9-utilities" (OuterVolumeSpecName: "utilities") pod "1b25b7b1-3b08-4074-957a-1fde73e047d9" (UID: "1b25b7b1-3b08-4074-957a-1fde73e047d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:52:42 crc kubenswrapper[4767]: I0127 16:52:42.748343 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b25b7b1-3b08-4074-957a-1fde73e047d9-kube-api-access-6gmpv" (OuterVolumeSpecName: "kube-api-access-6gmpv") pod "1b25b7b1-3b08-4074-957a-1fde73e047d9" (UID: "1b25b7b1-3b08-4074-957a-1fde73e047d9"). InnerVolumeSpecName "kube-api-access-6gmpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:52:42 crc kubenswrapper[4767]: I0127 16:52:42.773461 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b25b7b1-3b08-4074-957a-1fde73e047d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b25b7b1-3b08-4074-957a-1fde73e047d9" (UID: "1b25b7b1-3b08-4074-957a-1fde73e047d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:52:42 crc kubenswrapper[4767]: I0127 16:52:42.829643 4767 generic.go:334] "Generic (PLEG): container finished" podID="1b25b7b1-3b08-4074-957a-1fde73e047d9" containerID="235ab2b448cfdda082f1392a9c950fcff8ce24504e84909c5c6c25577b195f17" exitCode=0
Jan 27 16:52:42 crc kubenswrapper[4767]: I0127 16:52:42.829686 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6sfd" event={"ID":"1b25b7b1-3b08-4074-957a-1fde73e047d9","Type":"ContainerDied","Data":"235ab2b448cfdda082f1392a9c950fcff8ce24504e84909c5c6c25577b195f17"}
Jan 27 16:52:42 crc kubenswrapper[4767]: I0127 16:52:42.829723 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6sfd" event={"ID":"1b25b7b1-3b08-4074-957a-1fde73e047d9","Type":"ContainerDied","Data":"b7f88649895172c196b79fb6e95c707fcfd00fc4f5b96055f51d85533ed3c038"}
Jan 27 16:52:42 crc kubenswrapper[4767]: I0127 16:52:42.829745 4767 scope.go:117] "RemoveContainer" containerID="235ab2b448cfdda082f1392a9c950fcff8ce24504e84909c5c6c25577b195f17"
Jan 27 16:52:42 crc kubenswrapper[4767]: I0127 16:52:42.829741 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k6sfd"
Jan 27 16:52:42 crc kubenswrapper[4767]: I0127 16:52:42.845849 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b25b7b1-3b08-4074-957a-1fde73e047d9-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 16:52:42 crc kubenswrapper[4767]: I0127 16:52:42.845883 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gmpv\" (UniqueName: \"kubernetes.io/projected/1b25b7b1-3b08-4074-957a-1fde73e047d9-kube-api-access-6gmpv\") on node \"crc\" DevicePath \"\""
Jan 27 16:52:42 crc kubenswrapper[4767]: I0127 16:52:42.845900 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b25b7b1-3b08-4074-957a-1fde73e047d9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 16:52:42 crc kubenswrapper[4767]: I0127 16:52:42.849608 4767 scope.go:117] "RemoveContainer" containerID="67367381a7e9626b5cb0df9822e5a488f017abef1f07fbd5fa87bdb07d53b24a"
Jan 27 16:52:42 crc kubenswrapper[4767]: I0127 16:52:42.867268 4767 scope.go:117] "RemoveContainer" containerID="e16abf88b1aad30399861ea0e09eed042f7a992317ad41b23a1e8f89f311ae3d"
Jan 27 16:52:42 crc kubenswrapper[4767]: I0127 16:52:42.877318 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6sfd"]
Jan 27 16:52:42 crc kubenswrapper[4767]: I0127 16:52:42.882721 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6sfd"]
Jan 27 16:52:42 crc kubenswrapper[4767]: I0127 16:52:42.903358 4767 scope.go:117] "RemoveContainer" containerID="235ab2b448cfdda082f1392a9c950fcff8ce24504e84909c5c6c25577b195f17"
Jan 27 16:52:42 crc kubenswrapper[4767]: E0127 16:52:42.903871 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"235ab2b448cfdda082f1392a9c950fcff8ce24504e84909c5c6c25577b195f17\": container with ID starting with 235ab2b448cfdda082f1392a9c950fcff8ce24504e84909c5c6c25577b195f17 not found: ID does not exist" containerID="235ab2b448cfdda082f1392a9c950fcff8ce24504e84909c5c6c25577b195f17"
Jan 27 16:52:42 crc kubenswrapper[4767]: I0127 16:52:42.903930 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"235ab2b448cfdda082f1392a9c950fcff8ce24504e84909c5c6c25577b195f17"} err="failed to get container status \"235ab2b448cfdda082f1392a9c950fcff8ce24504e84909c5c6c25577b195f17\": rpc error: code = NotFound desc = could not find container \"235ab2b448cfdda082f1392a9c950fcff8ce24504e84909c5c6c25577b195f17\": container with ID starting with 235ab2b448cfdda082f1392a9c950fcff8ce24504e84909c5c6c25577b195f17 not found: ID does not exist"
Jan 27 16:52:42 crc kubenswrapper[4767]: I0127 16:52:42.903963 4767 scope.go:117] "RemoveContainer" containerID="67367381a7e9626b5cb0df9822e5a488f017abef1f07fbd5fa87bdb07d53b24a"
Jan 27 16:52:42 crc kubenswrapper[4767]: E0127 16:52:42.904267 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67367381a7e9626b5cb0df9822e5a488f017abef1f07fbd5fa87bdb07d53b24a\": container with ID starting with 67367381a7e9626b5cb0df9822e5a488f017abef1f07fbd5fa87bdb07d53b24a not found: ID does not exist" containerID="67367381a7e9626b5cb0df9822e5a488f017abef1f07fbd5fa87bdb07d53b24a"
Jan 27 16:52:42 crc kubenswrapper[4767]: I0127 16:52:42.904316 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67367381a7e9626b5cb0df9822e5a488f017abef1f07fbd5fa87bdb07d53b24a"} err="failed to get container status \"67367381a7e9626b5cb0df9822e5a488f017abef1f07fbd5fa87bdb07d53b24a\": rpc error: code = NotFound desc = could not find container \"67367381a7e9626b5cb0df9822e5a488f017abef1f07fbd5fa87bdb07d53b24a\": container with ID starting with 67367381a7e9626b5cb0df9822e5a488f017abef1f07fbd5fa87bdb07d53b24a not found: ID does not exist"
Jan 27 16:52:42 crc kubenswrapper[4767]: I0127 16:52:42.904340 4767 scope.go:117] "RemoveContainer" containerID="e16abf88b1aad30399861ea0e09eed042f7a992317ad41b23a1e8f89f311ae3d"
Jan 27 16:52:42 crc kubenswrapper[4767]: E0127 16:52:42.904657 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e16abf88b1aad30399861ea0e09eed042f7a992317ad41b23a1e8f89f311ae3d\": container with ID starting with e16abf88b1aad30399861ea0e09eed042f7a992317ad41b23a1e8f89f311ae3d not found: ID does not exist" containerID="e16abf88b1aad30399861ea0e09eed042f7a992317ad41b23a1e8f89f311ae3d"
Jan 27 16:52:42 crc kubenswrapper[4767]: I0127 16:52:42.904687 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e16abf88b1aad30399861ea0e09eed042f7a992317ad41b23a1e8f89f311ae3d"} err="failed to get container status \"e16abf88b1aad30399861ea0e09eed042f7a992317ad41b23a1e8f89f311ae3d\": rpc error: code = NotFound desc = could not find container \"e16abf88b1aad30399861ea0e09eed042f7a992317ad41b23a1e8f89f311ae3d\": container with ID starting with e16abf88b1aad30399861ea0e09eed042f7a992317ad41b23a1e8f89f311ae3d not found: ID does not exist"
Jan 27 16:52:44 crc kubenswrapper[4767]: I0127 16:52:44.343086 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b25b7b1-3b08-4074-957a-1fde73e047d9" path="/var/lib/kubelet/pods/1b25b7b1-3b08-4074-957a-1fde73e047d9/volumes"
Jan 27 16:52:54 crc kubenswrapper[4767]: I0127 16:52:54.858546 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 16:52:54 crc kubenswrapper[4767]: I0127 16:52:54.859514 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 16:53:24 crc kubenswrapper[4767]: I0127 16:53:24.857620 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 16:53:24 crc kubenswrapper[4767]: I0127 16:53:24.858321 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 16:53:54 crc kubenswrapper[4767]: I0127 16:53:54.857791 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 16:53:54 crc kubenswrapper[4767]: I0127 16:53:54.858557 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 16:53:54 crc kubenswrapper[4767]: I0127 16:53:54.858625 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx"
Jan 27 16:53:54 crc kubenswrapper[4767]: I0127 16:53:54.859820 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea"} pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 16:53:54 crc kubenswrapper[4767]: I0127 16:53:54.860073 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" containerID="cri-o://901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea" gracePeriod=600
Jan 27 16:53:55 crc kubenswrapper[4767]: E0127 16:53:55.012738 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:53:55 crc kubenswrapper[4767]: I0127 16:53:55.517281 4767 generic.go:334] "Generic (PLEG): container finished" podID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerID="901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea" exitCode=0
Jan 27 16:53:55 crc kubenswrapper[4767]: I0127 16:53:55.517417 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerDied","Data":"901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea"}
Jan 27 16:53:55 crc kubenswrapper[4767]: I0127 16:53:55.517937 4767 scope.go:117] "RemoveContainer" containerID="6a3ca7cc7b0750e94e6856f9899c8abe2a5d569a35236e157398abf3b01c2757"
Jan 27 16:53:55 crc kubenswrapper[4767]: I0127 16:53:55.518700 4767 scope.go:117] "RemoveContainer" containerID="901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea"
Jan 27 16:53:55 crc kubenswrapper[4767]: E0127 16:53:55.519178 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:54:09 crc kubenswrapper[4767]: I0127 16:54:09.326800 4767 scope.go:117] "RemoveContainer" containerID="901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea"
Jan 27 16:54:09 crc kubenswrapper[4767]: E0127 16:54:09.327927 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:54:21 crc kubenswrapper[4767]: I0127 16:54:21.326004 4767 scope.go:117] "RemoveContainer" containerID="901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea"
Jan 27 16:54:21 crc kubenswrapper[4767]: E0127 16:54:21.327099 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:54:36 crc kubenswrapper[4767]: I0127 16:54:36.325078 4767 scope.go:117] "RemoveContainer" containerID="901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea"
Jan 27 16:54:36 crc kubenswrapper[4767]: E0127 16:54:36.327366 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:54:51 crc kubenswrapper[4767]: I0127 16:54:51.326372 4767 scope.go:117] "RemoveContainer" containerID="901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea"
Jan 27 16:54:51 crc kubenswrapper[4767]: E0127 16:54:51.327772 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:55:02 crc kubenswrapper[4767]: I0127 16:55:02.326143 4767 scope.go:117] "RemoveContainer" containerID="901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea"
Jan 27 16:55:02 crc kubenswrapper[4767]: E0127 16:55:02.327034 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:55:17 crc kubenswrapper[4767]: I0127 16:55:17.326237 4767 scope.go:117] "RemoveContainer" containerID="901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea"
Jan 27 16:55:17 crc kubenswrapper[4767]: E0127 16:55:17.327181 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:55:32 crc kubenswrapper[4767]: I0127 16:55:32.325759 4767 scope.go:117] "RemoveContainer" containerID="901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea"
Jan 27 16:55:32 crc kubenswrapper[4767]: E0127 16:55:32.326953 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:55:34 crc kubenswrapper[4767]: I0127 16:55:34.179416 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n2mdr"]
Jan 27 16:55:34 crc kubenswrapper[4767]: E0127 16:55:34.180110 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b25b7b1-3b08-4074-957a-1fde73e047d9" containerName="extract-utilities"
Jan 27 16:55:34 crc kubenswrapper[4767]: I0127 16:55:34.180143 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b25b7b1-3b08-4074-957a-1fde73e047d9" containerName="extract-utilities"
Jan 27 16:55:34 crc kubenswrapper[4767]: E0127 16:55:34.180156 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b25b7b1-3b08-4074-957a-1fde73e047d9" containerName="extract-content"
Jan 27 16:55:34 crc kubenswrapper[4767]: I0127 16:55:34.180162 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b25b7b1-3b08-4074-957a-1fde73e047d9" containerName="extract-content"
Jan 27 16:55:34 crc kubenswrapper[4767]: E0127 16:55:34.180177 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b25b7b1-3b08-4074-957a-1fde73e047d9" containerName="registry-server"
Jan 27 16:55:34 crc kubenswrapper[4767]: I0127 16:55:34.180184 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b25b7b1-3b08-4074-957a-1fde73e047d9" containerName="registry-server"
Jan 27 16:55:34 crc kubenswrapper[4767]: I0127 16:55:34.180399 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b25b7b1-3b08-4074-957a-1fde73e047d9" containerName="registry-server"
Jan 27 16:55:34 crc kubenswrapper[4767]: I0127 16:55:34.181666 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n2mdr"
Jan 27 16:55:34 crc kubenswrapper[4767]: I0127 16:55:34.206162 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n2mdr"]
Jan 27 16:55:34 crc kubenswrapper[4767]: I0127 16:55:34.339381 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32-utilities\") pod \"redhat-operators-n2mdr\" (UID: \"bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32\") " pod="openshift-marketplace/redhat-operators-n2mdr"
Jan 27 16:55:34 crc kubenswrapper[4767]: I0127 16:55:34.339442 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nlqz\" (UniqueName: \"kubernetes.io/projected/bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32-kube-api-access-4nlqz\") pod \"redhat-operators-n2mdr\" (UID: \"bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32\") " pod="openshift-marketplace/redhat-operators-n2mdr"
Jan 27 16:55:34 crc kubenswrapper[4767]: I0127 16:55:34.339470 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32-catalog-content\") pod \"redhat-operators-n2mdr\" (UID: \"bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32\") " pod="openshift-marketplace/redhat-operators-n2mdr"
Jan 27 16:55:34 crc kubenswrapper[4767]: I0127 16:55:34.440596 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nlqz\" (UniqueName: \"kubernetes.io/projected/bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32-kube-api-access-4nlqz\") pod \"redhat-operators-n2mdr\" (UID: \"bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32\") " pod="openshift-marketplace/redhat-operators-n2mdr"
Jan 27 16:55:34 crc kubenswrapper[4767]: I0127 16:55:34.440666 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32-catalog-content\") pod \"redhat-operators-n2mdr\" (UID: \"bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32\") " pod="openshift-marketplace/redhat-operators-n2mdr"
Jan 27 16:55:34 crc kubenswrapper[4767]: I0127 16:55:34.440783 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32-utilities\") pod \"redhat-operators-n2mdr\" (UID: \"bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32\") " pod="openshift-marketplace/redhat-operators-n2mdr"
Jan 27 16:55:34 crc kubenswrapper[4767]: I0127 16:55:34.441402 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32-catalog-content\") pod \"redhat-operators-n2mdr\" (UID: \"bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32\") " pod="openshift-marketplace/redhat-operators-n2mdr"
Jan 27 16:55:34 crc kubenswrapper[4767]: I0127 16:55:34.441475 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32-utilities\") pod \"redhat-operators-n2mdr\" (UID: \"bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32\") " pod="openshift-marketplace/redhat-operators-n2mdr"
Jan 27 16:55:34 crc kubenswrapper[4767]: I0127 16:55:34.461978 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nlqz\" (UniqueName: \"kubernetes.io/projected/bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32-kube-api-access-4nlqz\") pod \"redhat-operators-n2mdr\" (UID: \"bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32\") " pod="openshift-marketplace/redhat-operators-n2mdr"
Jan 27 16:55:34 crc kubenswrapper[4767]: I0127 16:55:34.523801 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n2mdr"
Jan 27 16:55:35 crc kubenswrapper[4767]: I0127 16:55:34.985743 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n2mdr"]
Jan 27 16:55:35 crc kubenswrapper[4767]: I0127 16:55:35.406179 4767 generic.go:334] "Generic (PLEG): container finished" podID="bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32" containerID="180f218f98f6a764768dd84b6c852bda540033ae5ee7884958b896397f983fe8" exitCode=0
Jan 27 16:55:35 crc kubenswrapper[4767]: I0127 16:55:35.406281 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n2mdr" event={"ID":"bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32","Type":"ContainerDied","Data":"180f218f98f6a764768dd84b6c852bda540033ae5ee7884958b896397f983fe8"}
Jan 27 16:55:35 crc kubenswrapper[4767]: I0127 16:55:35.406314 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n2mdr" event={"ID":"bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32","Type":"ContainerStarted","Data":"282c269499a99420c1ec3ad963e33b293e44b58dfd545e1bf7af20452203e4d4"}
Jan 27 16:55:35 crc kubenswrapper[4767]: I0127 16:55:35.408059 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 16:55:36 crc kubenswrapper[4767]: I0127 16:55:36.418377 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n2mdr" event={"ID":"bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32","Type":"ContainerStarted","Data":"3fb28bc220a1f34b4a9b7624c5ac96007a90e0af8149a763af985b900e041f89"}
Jan 27 16:55:37 crc kubenswrapper[4767]: I0127 16:55:37.428689 4767 generic.go:334] "Generic (PLEG): container finished" podID="bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32" containerID="3fb28bc220a1f34b4a9b7624c5ac96007a90e0af8149a763af985b900e041f89" exitCode=0
Jan 27 16:55:37 crc kubenswrapper[4767]: I0127 16:55:37.429118 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n2mdr" event={"ID":"bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32","Type":"ContainerDied","Data":"3fb28bc220a1f34b4a9b7624c5ac96007a90e0af8149a763af985b900e041f89"}
Jan 27 16:55:38 crc kubenswrapper[4767]: I0127 16:55:38.443651 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n2mdr" event={"ID":"bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32","Type":"ContainerStarted","Data":"6d0b273cb214105f3e6091cdbeaec4d98daeb5f1f90ace13f11726e4f5661102"}
Jan 27 16:55:38 crc kubenswrapper[4767]: I0127 16:55:38.465978 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n2mdr" podStartSLOduration=2.005238462 podStartE2EDuration="4.465953154s" podCreationTimestamp="2026-01-27 16:55:34 +0000 UTC" firstStartedPulling="2026-01-27 16:55:35.407741447 +0000 UTC m=+3957.796758990" lastFinishedPulling="2026-01-27 16:55:37.868456149 +0000 UTC m=+3960.257473682" observedRunningTime="2026-01-27 16:55:38.463823293 +0000 UTC m=+3960.852840886" watchObservedRunningTime="2026-01-27 16:55:38.465953154 +0000 UTC m=+3960.854970717"
Jan 27 16:55:44 crc kubenswrapper[4767]: I0127 16:55:44.524988 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n2mdr"
Jan 27 16:55:44 crc kubenswrapper[4767]: I0127 16:55:44.525082 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n2mdr"
Jan 27 16:55:45 crc kubenswrapper[4767]: I0127 16:55:45.325881 4767 scope.go:117] "RemoveContainer" containerID="901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea"
Jan 27 16:55:45 crc kubenswrapper[4767]: E0127 16:55:45.326660 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:55:45 crc kubenswrapper[4767]: I0127 16:55:45.604333 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n2mdr" podUID="bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32" containerName="registry-server" probeResult="failure" output=<
Jan 27 16:55:45 crc kubenswrapper[4767]: timeout: failed to connect service ":50051" within 1s
Jan 27 16:55:45 crc kubenswrapper[4767]: >
Jan 27 16:55:54 crc kubenswrapper[4767]: I0127 16:55:54.585543 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n2mdr"
Jan 27 16:55:54 crc kubenswrapper[4767]: I0127 16:55:54.625623 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n2mdr"
Jan 27 16:55:54 crc kubenswrapper[4767]: I0127 16:55:54.827455 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n2mdr"]
Jan 27 16:55:56 crc kubenswrapper[4767]: I0127 16:55:56.590706 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n2mdr" podUID="bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32" containerName="registry-server" containerID="cri-o://6d0b273cb214105f3e6091cdbeaec4d98daeb5f1f90ace13f11726e4f5661102" gracePeriod=2
Jan 27 16:55:57 crc kubenswrapper[4767]: I0127 16:55:57.284176 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n2mdr"
Jan 27 16:55:57 crc kubenswrapper[4767]: I0127 16:55:57.423620 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32-utilities\") pod \"bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32\" (UID: \"bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32\") "
Jan 27 16:55:57 crc kubenswrapper[4767]: I0127 16:55:57.423782 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nlqz\" (UniqueName: \"kubernetes.io/projected/bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32-kube-api-access-4nlqz\") pod \"bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32\" (UID: \"bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32\") "
Jan 27 16:55:57 crc kubenswrapper[4767]: I0127 16:55:57.423856 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32-catalog-content\") pod \"bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32\" (UID: \"bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32\") "
Jan 27 16:55:57 crc kubenswrapper[4767]: I0127 16:55:57.425562 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32-utilities" (OuterVolumeSpecName: "utilities") pod "bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32" (UID: "bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:55:57 crc kubenswrapper[4767]: I0127 16:55:57.430923 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32-kube-api-access-4nlqz" (OuterVolumeSpecName: "kube-api-access-4nlqz") pod "bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32" (UID: "bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32"). InnerVolumeSpecName "kube-api-access-4nlqz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:55:57 crc kubenswrapper[4767]: I0127 16:55:57.525881 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 16:55:57 crc kubenswrapper[4767]: I0127 16:55:57.525916 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nlqz\" (UniqueName: \"kubernetes.io/projected/bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32-kube-api-access-4nlqz\") on node \"crc\" DevicePath \"\""
Jan 27 16:55:57 crc kubenswrapper[4767]: I0127 16:55:57.550499 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32" (UID: "bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:55:57 crc kubenswrapper[4767]: I0127 16:55:57.601380 4767 generic.go:334] "Generic (PLEG): container finished" podID="bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32" containerID="6d0b273cb214105f3e6091cdbeaec4d98daeb5f1f90ace13f11726e4f5661102" exitCode=0
Jan 27 16:55:57 crc kubenswrapper[4767]: I0127 16:55:57.601458 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n2mdr" event={"ID":"bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32","Type":"ContainerDied","Data":"6d0b273cb214105f3e6091cdbeaec4d98daeb5f1f90ace13f11726e4f5661102"}
Jan 27 16:55:57 crc kubenswrapper[4767]: I0127 16:55:57.601530 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n2mdr" event={"ID":"bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32","Type":"ContainerDied","Data":"282c269499a99420c1ec3ad963e33b293e44b58dfd545e1bf7af20452203e4d4"}
Jan 27 16:55:57 crc kubenswrapper[4767]: I0127 16:55:57.601542 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n2mdr"
Jan 27 16:55:57 crc kubenswrapper[4767]: I0127 16:55:57.601554 4767 scope.go:117] "RemoveContainer" containerID="6d0b273cb214105f3e6091cdbeaec4d98daeb5f1f90ace13f11726e4f5661102"
Jan 27 16:55:57 crc kubenswrapper[4767]: I0127 16:55:57.626962 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 16:55:57 crc kubenswrapper[4767]: I0127 16:55:57.636313 4767 scope.go:117] "RemoveContainer" containerID="3fb28bc220a1f34b4a9b7624c5ac96007a90e0af8149a763af985b900e041f89"
Jan 27 16:55:57 crc kubenswrapper[4767]: I0127 16:55:57.651987 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n2mdr"]
Jan 27 16:55:57 crc kubenswrapper[4767]: I0127 16:55:57.671311 4767 scope.go:117] "RemoveContainer" containerID="180f218f98f6a764768dd84b6c852bda540033ae5ee7884958b896397f983fe8"
Jan 27 16:55:57 crc kubenswrapper[4767]: I0127 16:55:57.672274 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n2mdr"]
Jan 27 16:55:57 crc kubenswrapper[4767]: I0127 16:55:57.692565 4767 scope.go:117] "RemoveContainer" containerID="6d0b273cb214105f3e6091cdbeaec4d98daeb5f1f90ace13f11726e4f5661102"
Jan 27 16:55:57 crc kubenswrapper[4767]: E0127 16:55:57.693132 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d0b273cb214105f3e6091cdbeaec4d98daeb5f1f90ace13f11726e4f5661102\": container with ID starting with 6d0b273cb214105f3e6091cdbeaec4d98daeb5f1f90ace13f11726e4f5661102 not found: ID does not exist" containerID="6d0b273cb214105f3e6091cdbeaec4d98daeb5f1f90ace13f11726e4f5661102"
Jan 27 16:55:57 crc kubenswrapper[4767]: I0127 16:55:57.693181 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d0b273cb214105f3e6091cdbeaec4d98daeb5f1f90ace13f11726e4f5661102"} err="failed to get container status \"6d0b273cb214105f3e6091cdbeaec4d98daeb5f1f90ace13f11726e4f5661102\": rpc error: code = NotFound desc = could not find container \"6d0b273cb214105f3e6091cdbeaec4d98daeb5f1f90ace13f11726e4f5661102\": container with ID starting with 6d0b273cb214105f3e6091cdbeaec4d98daeb5f1f90ace13f11726e4f5661102 not found: ID does not exist"
Jan 27 16:55:57 crc kubenswrapper[4767]: I0127 16:55:57.693236 4767 scope.go:117] "RemoveContainer" containerID="3fb28bc220a1f34b4a9b7624c5ac96007a90e0af8149a763af985b900e041f89"
Jan 27 16:55:57 crc kubenswrapper[4767]: E0127 16:55:57.693728 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fb28bc220a1f34b4a9b7624c5ac96007a90e0af8149a763af985b900e041f89\": container with ID starting with 3fb28bc220a1f34b4a9b7624c5ac96007a90e0af8149a763af985b900e041f89 not found: ID does not exist" containerID="3fb28bc220a1f34b4a9b7624c5ac96007a90e0af8149a763af985b900e041f89"
Jan 27 16:55:57 crc kubenswrapper[4767]: I0127 16:55:57.693820 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fb28bc220a1f34b4a9b7624c5ac96007a90e0af8149a763af985b900e041f89"} err="failed to get container status \"3fb28bc220a1f34b4a9b7624c5ac96007a90e0af8149a763af985b900e041f89\": rpc error: code = NotFound desc = could not find container \"3fb28bc220a1f34b4a9b7624c5ac96007a90e0af8149a763af985b900e041f89\": container with ID starting with 3fb28bc220a1f34b4a9b7624c5ac96007a90e0af8149a763af985b900e041f89 not found: ID does not exist"
Jan 27 16:55:57 crc kubenswrapper[4767]: I0127 16:55:57.693870 4767 scope.go:117] "RemoveContainer" containerID="180f218f98f6a764768dd84b6c852bda540033ae5ee7884958b896397f983fe8"
Jan 27 16:55:57 crc kubenswrapper[4767]: E0127 16:55:57.694186 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"180f218f98f6a764768dd84b6c852bda540033ae5ee7884958b896397f983fe8\": container with ID starting with 180f218f98f6a764768dd84b6c852bda540033ae5ee7884958b896397f983fe8 not found: ID does not exist" containerID="180f218f98f6a764768dd84b6c852bda540033ae5ee7884958b896397f983fe8"
Jan 27 16:55:57 crc kubenswrapper[4767]: I0127 16:55:57.694303 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"180f218f98f6a764768dd84b6c852bda540033ae5ee7884958b896397f983fe8"} err="failed to get container status \"180f218f98f6a764768dd84b6c852bda540033ae5ee7884958b896397f983fe8\": rpc error: code = NotFound desc = could not find container \"180f218f98f6a764768dd84b6c852bda540033ae5ee7884958b896397f983fe8\": container with ID starting with 180f218f98f6a764768dd84b6c852bda540033ae5ee7884958b896397f983fe8 not found: ID does not exist"
Jan 27 16:55:58 crc kubenswrapper[4767]: I0127 16:55:58.342027 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32" path="/var/lib/kubelet/pods/bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32/volumes"
Jan 27 16:55:59 crc kubenswrapper[4767]: I0127 16:55:59.326192 4767 scope.go:117] "RemoveContainer" containerID="901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea"
Jan 27 16:55:59 crc kubenswrapper[4767]: E0127 16:55:59.326641 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298"
Jan 27 16:56:10 crc kubenswrapper[4767]: I0127 16:56:10.327299 4767 scope.go:117] "RemoveContainer" containerID="901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea"
Jan 27 16:56:10 crc kubenswrapper[4767]: E0127 16:56:10.328578 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:56:23 crc kubenswrapper[4767]: I0127 16:56:23.325558 4767 scope.go:117] "RemoveContainer" containerID="901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea" Jan 27 16:56:23 crc kubenswrapper[4767]: E0127 16:56:23.326334 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:56:38 crc kubenswrapper[4767]: I0127 16:56:38.330032 4767 scope.go:117] "RemoveContainer" containerID="901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea" Jan 27 16:56:38 crc kubenswrapper[4767]: E0127 16:56:38.331138 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:56:50 crc kubenswrapper[4767]: I0127 16:56:50.325824 4767 scope.go:117] "RemoveContainer" containerID="901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea" Jan 27 16:56:50 crc kubenswrapper[4767]: E0127 16:56:50.326476 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:57:03 crc kubenswrapper[4767]: I0127 16:57:03.325711 4767 scope.go:117] "RemoveContainer" containerID="901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea" Jan 27 16:57:03 crc kubenswrapper[4767]: E0127 16:57:03.326513 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:57:15 crc kubenswrapper[4767]: I0127 16:57:15.325537 4767 scope.go:117] "RemoveContainer" containerID="901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea" Jan 27 16:57:15 crc kubenswrapper[4767]: E0127 16:57:15.326272 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:57:30 crc kubenswrapper[4767]: I0127 16:57:30.325379 4767 scope.go:117] "RemoveContainer" containerID="901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea" Jan 27 16:57:30 crc kubenswrapper[4767]: E0127 16:57:30.325913 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:57:45 crc kubenswrapper[4767]: I0127 16:57:45.325918 4767 scope.go:117] "RemoveContainer" containerID="901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea" Jan 27 16:57:45 crc kubenswrapper[4767]: E0127 16:57:45.326894 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:57:59 crc kubenswrapper[4767]: I0127 16:57:59.325547 4767 scope.go:117] "RemoveContainer" containerID="901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea" Jan 27 16:57:59 crc kubenswrapper[4767]: E0127 16:57:59.326748 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:58:12 crc kubenswrapper[4767]: I0127 16:58:12.326178 4767 scope.go:117] "RemoveContainer" containerID="901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea" Jan 27 16:58:12 crc kubenswrapper[4767]: E0127 16:58:12.327330 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:58:24 crc kubenswrapper[4767]: I0127 16:58:24.325464 4767 scope.go:117] "RemoveContainer" containerID="901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea" Jan 27 16:58:24 crc kubenswrapper[4767]: E0127 16:58:24.326435 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:58:35 crc kubenswrapper[4767]: I0127 16:58:35.325577 4767 scope.go:117] "RemoveContainer" containerID="901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea" Jan 27 16:58:35 crc kubenswrapper[4767]: E0127 16:58:35.326629 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:58:49 crc kubenswrapper[4767]: I0127 16:58:49.325467 4767 scope.go:117] "RemoveContainer" containerID="901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea" Jan 27 16:58:49 crc kubenswrapper[4767]: E0127 16:58:49.327169 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 16:59:00 crc kubenswrapper[4767]: I0127 16:59:00.325139 4767 scope.go:117] "RemoveContainer" containerID="901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea" Jan 27 16:59:01 crc kubenswrapper[4767]: I0127 16:59:01.274551 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerStarted","Data":"67cf98cb7edba4ebdfb2b59dde0236a61621380d9f946b679e711de60d99f892"} Jan 27 17:00:00 crc kubenswrapper[4767]: I0127 17:00:00.193161 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492220-xhvqz"] Jan 27 17:00:00 crc kubenswrapper[4767]: E0127 17:00:00.194048 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32" containerName="registry-server" Jan 27 17:00:00 crc kubenswrapper[4767]: I0127 17:00:00.194068 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32" containerName="registry-server" Jan 27 17:00:00 crc kubenswrapper[4767]: E0127 17:00:00.194080 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32" containerName="extract-utilities" Jan 27 17:00:00 crc kubenswrapper[4767]: I0127 17:00:00.194088 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32" containerName="extract-utilities" Jan 27 17:00:00 crc kubenswrapper[4767]: E0127 17:00:00.194115 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32" containerName="extract-content" Jan 27 17:00:00 crc kubenswrapper[4767]: I0127 17:00:00.194123 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32" containerName="extract-content" Jan 27 17:00:00 crc kubenswrapper[4767]: I0127 17:00:00.194317 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd0b27c5-3cf8-4ab6-95d7-f5f640d02c32" 
containerName="registry-server" Jan 27 17:00:00 crc kubenswrapper[4767]: I0127 17:00:00.194760 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-xhvqz" Jan 27 17:00:00 crc kubenswrapper[4767]: I0127 17:00:00.196703 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 17:00:00 crc kubenswrapper[4767]: I0127 17:00:00.196807 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 17:00:00 crc kubenswrapper[4767]: I0127 17:00:00.209439 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492220-xhvqz"] Jan 27 17:00:00 crc kubenswrapper[4767]: I0127 17:00:00.294980 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e83e49ae-c5f4-4060-9c21-305cea839622-config-volume\") pod \"collect-profiles-29492220-xhvqz\" (UID: \"e83e49ae-c5f4-4060-9c21-305cea839622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-xhvqz" Jan 27 17:00:00 crc kubenswrapper[4767]: I0127 17:00:00.295033 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t6qk\" (UniqueName: \"kubernetes.io/projected/e83e49ae-c5f4-4060-9c21-305cea839622-kube-api-access-9t6qk\") pod \"collect-profiles-29492220-xhvqz\" (UID: \"e83e49ae-c5f4-4060-9c21-305cea839622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-xhvqz" Jan 27 17:00:00 crc kubenswrapper[4767]: I0127 17:00:00.295310 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e83e49ae-c5f4-4060-9c21-305cea839622-secret-volume\") pod \"collect-profiles-29492220-xhvqz\" (UID: \"e83e49ae-c5f4-4060-9c21-305cea839622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-xhvqz" Jan 27 17:00:00 crc kubenswrapper[4767]: I0127 17:00:00.396442 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e83e49ae-c5f4-4060-9c21-305cea839622-secret-volume\") pod \"collect-profiles-29492220-xhvqz\" (UID: \"e83e49ae-c5f4-4060-9c21-305cea839622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-xhvqz" Jan 27 17:00:00 crc kubenswrapper[4767]: I0127 17:00:00.396766 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e83e49ae-c5f4-4060-9c21-305cea839622-config-volume\") pod \"collect-profiles-29492220-xhvqz\" (UID: \"e83e49ae-c5f4-4060-9c21-305cea839622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-xhvqz" Jan 27 17:00:00 crc kubenswrapper[4767]: I0127 17:00:00.396854 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t6qk\" (UniqueName: \"kubernetes.io/projected/e83e49ae-c5f4-4060-9c21-305cea839622-kube-api-access-9t6qk\") pod \"collect-profiles-29492220-xhvqz\" (UID: \"e83e49ae-c5f4-4060-9c21-305cea839622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-xhvqz" Jan 27 17:00:00 crc kubenswrapper[4767]: I0127 17:00:00.398484 4767 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e83e49ae-c5f4-4060-9c21-305cea839622-config-volume\") pod \"collect-profiles-29492220-xhvqz\" (UID: \"e83e49ae-c5f4-4060-9c21-305cea839622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-xhvqz" Jan 27 17:00:00 crc kubenswrapper[4767]: I0127 17:00:00.403401 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e83e49ae-c5f4-4060-9c21-305cea839622-secret-volume\") pod \"collect-profiles-29492220-xhvqz\" (UID: \"e83e49ae-c5f4-4060-9c21-305cea839622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-xhvqz" Jan 27 17:00:00 crc kubenswrapper[4767]: I0127 17:00:00.421832 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t6qk\" (UniqueName: \"kubernetes.io/projected/e83e49ae-c5f4-4060-9c21-305cea839622-kube-api-access-9t6qk\") pod \"collect-profiles-29492220-xhvqz\" (UID: \"e83e49ae-c5f4-4060-9c21-305cea839622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-xhvqz" Jan 27 17:00:00 crc kubenswrapper[4767]: I0127 17:00:00.513462 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-xhvqz" Jan 27 17:00:00 crc kubenswrapper[4767]: I0127 17:00:00.971597 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492220-xhvqz"] Jan 27 17:00:01 crc kubenswrapper[4767]: I0127 17:00:01.845628 4767 generic.go:334] "Generic (PLEG): container finished" podID="e83e49ae-c5f4-4060-9c21-305cea839622" containerID="1b3257c60097fb2271de2b927db5196f5c905b08a3c27a3672fe701b9201d3d1" exitCode=0 Jan 27 17:00:01 crc kubenswrapper[4767]: I0127 17:00:01.845691 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-xhvqz" event={"ID":"e83e49ae-c5f4-4060-9c21-305cea839622","Type":"ContainerDied","Data":"1b3257c60097fb2271de2b927db5196f5c905b08a3c27a3672fe701b9201d3d1"} Jan 27 17:00:01 crc kubenswrapper[4767]: I0127 17:00:01.847318 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-xhvqz" event={"ID":"e83e49ae-c5f4-4060-9c21-305cea839622","Type":"ContainerStarted","Data":"633fc7496d1097e1c164109fd1ad99dd1abd02cbc89621b8d1098d6bed9a6fc5"} Jan 27 17:00:03 crc kubenswrapper[4767]: I0127 17:00:03.219402 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-xhvqz" Jan 27 17:00:03 crc kubenswrapper[4767]: I0127 17:00:03.358031 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e83e49ae-c5f4-4060-9c21-305cea839622-secret-volume\") pod \"e83e49ae-c5f4-4060-9c21-305cea839622\" (UID: \"e83e49ae-c5f4-4060-9c21-305cea839622\") " Jan 27 17:00:03 crc kubenswrapper[4767]: I0127 17:00:03.358114 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t6qk\" (UniqueName: \"kubernetes.io/projected/e83e49ae-c5f4-4060-9c21-305cea839622-kube-api-access-9t6qk\") pod \"e83e49ae-c5f4-4060-9c21-305cea839622\" (UID: \"e83e49ae-c5f4-4060-9c21-305cea839622\") " Jan 27 17:00:03 crc kubenswrapper[4767]: I0127 17:00:03.358167 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e83e49ae-c5f4-4060-9c21-305cea839622-config-volume\") pod \"e83e49ae-c5f4-4060-9c21-305cea839622\" (UID: \"e83e49ae-c5f4-4060-9c21-305cea839622\") " Jan 27 17:00:03 crc kubenswrapper[4767]: I0127 17:00:03.358810 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e83e49ae-c5f4-4060-9c21-305cea839622-config-volume" (OuterVolumeSpecName: "config-volume") pod "e83e49ae-c5f4-4060-9c21-305cea839622" (UID: "e83e49ae-c5f4-4060-9c21-305cea839622"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:00:03 crc kubenswrapper[4767]: I0127 17:00:03.366381 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e83e49ae-c5f4-4060-9c21-305cea839622-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e83e49ae-c5f4-4060-9c21-305cea839622" (UID: "e83e49ae-c5f4-4060-9c21-305cea839622"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:00:03 crc kubenswrapper[4767]: I0127 17:00:03.378481 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e83e49ae-c5f4-4060-9c21-305cea839622-kube-api-access-9t6qk" (OuterVolumeSpecName: "kube-api-access-9t6qk") pod "e83e49ae-c5f4-4060-9c21-305cea839622" (UID: "e83e49ae-c5f4-4060-9c21-305cea839622"). InnerVolumeSpecName "kube-api-access-9t6qk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:00:03 crc kubenswrapper[4767]: I0127 17:00:03.460343 4767 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e83e49ae-c5f4-4060-9c21-305cea839622-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:03 crc kubenswrapper[4767]: I0127 17:00:03.460381 4767 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e83e49ae-c5f4-4060-9c21-305cea839622-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:03 crc kubenswrapper[4767]: I0127 17:00:03.460392 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t6qk\" (UniqueName: \"kubernetes.io/projected/e83e49ae-c5f4-4060-9c21-305cea839622-kube-api-access-9t6qk\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:03 crc kubenswrapper[4767]: I0127 17:00:03.873784 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-xhvqz" event={"ID":"e83e49ae-c5f4-4060-9c21-305cea839622","Type":"ContainerDied","Data":"633fc7496d1097e1c164109fd1ad99dd1abd02cbc89621b8d1098d6bed9a6fc5"} Jan 27 17:00:03 crc kubenswrapper[4767]: I0127 17:00:03.873860 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="633fc7496d1097e1c164109fd1ad99dd1abd02cbc89621b8d1098d6bed9a6fc5" Jan 27 17:00:03 crc kubenswrapper[4767]: I0127 17:00:03.873964 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-xhvqz" Jan 27 17:00:04 crc kubenswrapper[4767]: I0127 17:00:04.312629 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492175-x2p7k"] Jan 27 17:00:04 crc kubenswrapper[4767]: I0127 17:00:04.319552 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492175-x2p7k"] Jan 27 17:00:04 crc kubenswrapper[4767]: I0127 17:00:04.333582 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="696e1b3d-2da4-4734-9383-43f8c13791fe" path="/var/lib/kubelet/pods/696e1b3d-2da4-4734-9383-43f8c13791fe/volumes" Jan 27 17:00:09 crc kubenswrapper[4767]: I0127 17:00:09.027328 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-722hl"] Jan 27 17:00:09 crc kubenswrapper[4767]: E0127 17:00:09.028392 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e83e49ae-c5f4-4060-9c21-305cea839622" containerName="collect-profiles" Jan 27 17:00:09 crc kubenswrapper[4767]: I0127 17:00:09.028416 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="e83e49ae-c5f4-4060-9c21-305cea839622" containerName="collect-profiles" Jan 27 17:00:09 crc kubenswrapper[4767]: I0127 17:00:09.028663 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="e83e49ae-c5f4-4060-9c21-305cea839622" containerName="collect-profiles" Jan 27 17:00:09 crc kubenswrapper[4767]: I0127 17:00:09.029984 4767 util.go:30] "No sandbox for pod can be found. 
Jan 27 17:00:09 crc kubenswrapper[4767]: I0127 17:00:09.053001 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-722hl"]
Jan 27 17:00:09 crc kubenswrapper[4767]: I0127 17:00:09.145809 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp7xt\" (UniqueName: \"kubernetes.io/projected/eb3bd86e-7da3-4f5a-bae8-37573493b0f4-kube-api-access-tp7xt\") pod \"community-operators-722hl\" (UID: \"eb3bd86e-7da3-4f5a-bae8-37573493b0f4\") " pod="openshift-marketplace/community-operators-722hl"
Jan 27 17:00:09 crc kubenswrapper[4767]: I0127 17:00:09.146035 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb3bd86e-7da3-4f5a-bae8-37573493b0f4-utilities\") pod \"community-operators-722hl\" (UID: \"eb3bd86e-7da3-4f5a-bae8-37573493b0f4\") " pod="openshift-marketplace/community-operators-722hl"
Jan 27 17:00:09 crc kubenswrapper[4767]: I0127 17:00:09.146167 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb3bd86e-7da3-4f5a-bae8-37573493b0f4-catalog-content\") pod \"community-operators-722hl\" (UID: \"eb3bd86e-7da3-4f5a-bae8-37573493b0f4\") " pod="openshift-marketplace/community-operators-722hl"
Jan 27 17:00:09 crc kubenswrapper[4767]: I0127 17:00:09.247863 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb3bd86e-7da3-4f5a-bae8-37573493b0f4-catalog-content\") pod \"community-operators-722hl\" (UID: \"eb3bd86e-7da3-4f5a-bae8-37573493b0f4\") " pod="openshift-marketplace/community-operators-722hl"
Jan 27 17:00:09 crc kubenswrapper[4767]: I0127 17:00:09.247980 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp7xt\" (UniqueName: \"kubernetes.io/projected/eb3bd86e-7da3-4f5a-bae8-37573493b0f4-kube-api-access-tp7xt\") pod \"community-operators-722hl\" (UID: \"eb3bd86e-7da3-4f5a-bae8-37573493b0f4\") " pod="openshift-marketplace/community-operators-722hl"
Jan 27 17:00:09 crc kubenswrapper[4767]: I0127 17:00:09.248003 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb3bd86e-7da3-4f5a-bae8-37573493b0f4-utilities\") pod \"community-operators-722hl\" (UID: \"eb3bd86e-7da3-4f5a-bae8-37573493b0f4\") " pod="openshift-marketplace/community-operators-722hl"
Jan 27 17:00:09 crc kubenswrapper[4767]: I0127 17:00:09.248724 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb3bd86e-7da3-4f5a-bae8-37573493b0f4-utilities\") pod \"community-operators-722hl\" (UID: \"eb3bd86e-7da3-4f5a-bae8-37573493b0f4\") " pod="openshift-marketplace/community-operators-722hl"
Jan 27 17:00:09 crc kubenswrapper[4767]: I0127 17:00:09.248746 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb3bd86e-7da3-4f5a-bae8-37573493b0f4-catalog-content\") pod \"community-operators-722hl\" (UID: \"eb3bd86e-7da3-4f5a-bae8-37573493b0f4\") " pod="openshift-marketplace/community-operators-722hl"
Jan 27 17:00:09 crc kubenswrapper[4767]: I0127 17:00:09.275128 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp7xt\" (UniqueName: \"kubernetes.io/projected/eb3bd86e-7da3-4f5a-bae8-37573493b0f4-kube-api-access-tp7xt\") pod \"community-operators-722hl\" (UID: \"eb3bd86e-7da3-4f5a-bae8-37573493b0f4\") " pod="openshift-marketplace/community-operators-722hl"
Jan 27 17:00:09 crc kubenswrapper[4767]: I0127 17:00:09.381747 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-722hl"
Jan 27 17:00:09 crc kubenswrapper[4767]: I0127 17:00:09.949433 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-722hl"]
Jan 27 17:00:10 crc kubenswrapper[4767]: I0127 17:00:10.945186 4767 generic.go:334] "Generic (PLEG): container finished" podID="eb3bd86e-7da3-4f5a-bae8-37573493b0f4" containerID="697849791ef94796d246ece5b48e7708c2f3424521b076869a813d4af265fc46" exitCode=0
Jan 27 17:00:10 crc kubenswrapper[4767]: I0127 17:00:10.945326 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-722hl" event={"ID":"eb3bd86e-7da3-4f5a-bae8-37573493b0f4","Type":"ContainerDied","Data":"697849791ef94796d246ece5b48e7708c2f3424521b076869a813d4af265fc46"}
Jan 27 17:00:10 crc kubenswrapper[4767]: I0127 17:00:10.945371 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-722hl" event={"ID":"eb3bd86e-7da3-4f5a-bae8-37573493b0f4","Type":"ContainerStarted","Data":"4a140c765ce10645b8bd85230f1a44b4c11d66b639d2508d9b2e95ad26080c94"}
Jan 27 17:00:15 crc kubenswrapper[4767]: I0127 17:00:15.986142 4767 generic.go:334] "Generic (PLEG): container finished" podID="eb3bd86e-7da3-4f5a-bae8-37573493b0f4" containerID="54e575c592913e66ff0efa4d77488897abfb0cff583e35a1377280115c2826e4" exitCode=0
Jan 27 17:00:15 crc kubenswrapper[4767]: I0127 17:00:15.986229 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-722hl" event={"ID":"eb3bd86e-7da3-4f5a-bae8-37573493b0f4","Type":"ContainerDied","Data":"54e575c592913e66ff0efa4d77488897abfb0cff583e35a1377280115c2826e4"}
Jan 27 17:00:17 crc kubenswrapper[4767]: I0127 17:00:17.004248 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-722hl" event={"ID":"eb3bd86e-7da3-4f5a-bae8-37573493b0f4","Type":"ContainerStarted","Data":"d6439b8cc98740816d20c7e2873cca1938fd08bac697ca01d9ec3170dbc569f5"}
Jan 27 17:00:17 crc kubenswrapper[4767]: I0127 17:00:17.036410 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-722hl" podStartSLOduration=3.497058219 podStartE2EDuration="9.036390377s" podCreationTimestamp="2026-01-27 17:00:08 +0000 UTC" firstStartedPulling="2026-01-27 17:00:10.947834918 +0000 UTC m=+4233.336852481" lastFinishedPulling="2026-01-27 17:00:16.487167116 +0000 UTC m=+4238.876184639" observedRunningTime="2026-01-27 17:00:17.028006019 +0000 UTC m=+4239.417023552" watchObservedRunningTime="2026-01-27 17:00:17.036390377 +0000 UTC m=+4239.425407900"
Jan 27 17:00:19 crc kubenswrapper[4767]: I0127 17:00:19.383169 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-722hl"
Jan 27 17:00:19 crc kubenswrapper[4767]: I0127 17:00:19.383428 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-722hl"
Jan 27 17:00:19 crc kubenswrapper[4767]: I0127 17:00:19.448378 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-722hl"
Jan 27 17:00:29 crc kubenswrapper[4767]: I0127 17:00:29.439359 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-722hl"
Jan 27 17:00:29 crc kubenswrapper[4767]: I0127 17:00:29.535884 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-722hl"]
Jan 27 17:00:29 crc kubenswrapper[4767]: I0127 17:00:29.566587 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f8p8n"]
Jan 27 17:00:29 crc kubenswrapper[4767]: I0127 17:00:29.567028 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f8p8n" podUID="6d760573-73ce-45c4-bb6c-bb7fad22d7b3" containerName="registry-server" containerID="cri-o://722337b744114a3aca9547934e467fb44d99773173d98b30654ee41bfff329f8" gracePeriod=2
Jan 27 17:00:30 crc kubenswrapper[4767]: I0127 17:00:30.105083 4767 generic.go:334] "Generic (PLEG): container finished" podID="6d760573-73ce-45c4-bb6c-bb7fad22d7b3" containerID="722337b744114a3aca9547934e467fb44d99773173d98b30654ee41bfff329f8" exitCode=0
Jan 27 17:00:30 crc kubenswrapper[4767]: I0127 17:00:30.105165 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8p8n" event={"ID":"6d760573-73ce-45c4-bb6c-bb7fad22d7b3","Type":"ContainerDied","Data":"722337b744114a3aca9547934e467fb44d99773173d98b30654ee41bfff329f8"}
Jan 27 17:00:30 crc kubenswrapper[4767]: I0127 17:00:30.765111 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f8p8n"
Jan 27 17:00:30 crc kubenswrapper[4767]: I0127 17:00:30.854174 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d760573-73ce-45c4-bb6c-bb7fad22d7b3-utilities\") pod \"6d760573-73ce-45c4-bb6c-bb7fad22d7b3\" (UID: \"6d760573-73ce-45c4-bb6c-bb7fad22d7b3\") "
Jan 27 17:00:30 crc kubenswrapper[4767]: I0127 17:00:30.854241 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r82l\" (UniqueName: \"kubernetes.io/projected/6d760573-73ce-45c4-bb6c-bb7fad22d7b3-kube-api-access-9r82l\") pod \"6d760573-73ce-45c4-bb6c-bb7fad22d7b3\" (UID: \"6d760573-73ce-45c4-bb6c-bb7fad22d7b3\") "
Jan 27 17:00:30 crc kubenswrapper[4767]: I0127 17:00:30.854268 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d760573-73ce-45c4-bb6c-bb7fad22d7b3-catalog-content\") pod \"6d760573-73ce-45c4-bb6c-bb7fad22d7b3\" (UID: \"6d760573-73ce-45c4-bb6c-bb7fad22d7b3\") "
Jan 27 17:00:30 crc kubenswrapper[4767]: I0127 17:00:30.855282 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d760573-73ce-45c4-bb6c-bb7fad22d7b3-utilities" (OuterVolumeSpecName: "utilities") pod "6d760573-73ce-45c4-bb6c-bb7fad22d7b3" (UID: "6d760573-73ce-45c4-bb6c-bb7fad22d7b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:00:30 crc kubenswrapper[4767]: I0127 17:00:30.860424 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d760573-73ce-45c4-bb6c-bb7fad22d7b3-kube-api-access-9r82l" (OuterVolumeSpecName: "kube-api-access-9r82l") pod "6d760573-73ce-45c4-bb6c-bb7fad22d7b3" (UID: "6d760573-73ce-45c4-bb6c-bb7fad22d7b3"). InnerVolumeSpecName "kube-api-access-9r82l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:00:30 crc kubenswrapper[4767]: I0127 17:00:30.902048 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d760573-73ce-45c4-bb6c-bb7fad22d7b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d760573-73ce-45c4-bb6c-bb7fad22d7b3" (UID: "6d760573-73ce-45c4-bb6c-bb7fad22d7b3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:00:30 crc kubenswrapper[4767]: I0127 17:00:30.955630 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d760573-73ce-45c4-bb6c-bb7fad22d7b3-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:30 crc kubenswrapper[4767]: I0127 17:00:30.955959 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r82l\" (UniqueName: \"kubernetes.io/projected/6d760573-73ce-45c4-bb6c-bb7fad22d7b3-kube-api-access-9r82l\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:30 crc kubenswrapper[4767]: I0127 17:00:30.955969 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d760573-73ce-45c4-bb6c-bb7fad22d7b3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:31 crc kubenswrapper[4767]: I0127 17:00:31.112186 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8p8n" event={"ID":"6d760573-73ce-45c4-bb6c-bb7fad22d7b3","Type":"ContainerDied","Data":"a4bc594e63684d5d6771f29917ec1dbe5a186b1f50230b7d496a248173669c30"} Jan 27 17:00:31 crc kubenswrapper[4767]: I0127 17:00:31.112246 4767 scope.go:117] "RemoveContainer" containerID="722337b744114a3aca9547934e467fb44d99773173d98b30654ee41bfff329f8" Jan 27 17:00:31 crc kubenswrapper[4767]: I0127 17:00:31.112293 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f8p8n" Jan 27 17:00:31 crc kubenswrapper[4767]: I0127 17:00:31.395530 4767 scope.go:117] "RemoveContainer" containerID="1edb957e43cb2f1e05f1bcb93e21e3e142f10804f68a6249c457d7d2acc03306" Jan 27 17:00:31 crc kubenswrapper[4767]: I0127 17:00:31.410189 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f8p8n"] Jan 27 17:00:31 crc kubenswrapper[4767]: I0127 17:00:31.417882 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f8p8n"] Jan 27 17:00:31 crc kubenswrapper[4767]: I0127 17:00:31.439405 4767 scope.go:117] "RemoveContainer" containerID="14c69f0b361f371de84c63e7d39079b08488f3411bc9b451f7dd4ae023898076" Jan 27 17:00:32 crc kubenswrapper[4767]: I0127 17:00:32.341631 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d760573-73ce-45c4-bb6c-bb7fad22d7b3" path="/var/lib/kubelet/pods/6d760573-73ce-45c4-bb6c-bb7fad22d7b3/volumes" Jan 27 17:00:43 crc kubenswrapper[4767]: I0127 17:00:43.113291 4767 scope.go:117] "RemoveContainer" containerID="a44602f05d99c71cfa1456833dceb348ad4aec48b3cbfeeaaf6a1b7cb83e53ad" Jan 27 17:00:44 crc kubenswrapper[4767]: I0127 17:00:44.520732 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pmsvw"] Jan 27 17:00:44 crc kubenswrapper[4767]: E0127 17:00:44.521585 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d760573-73ce-45c4-bb6c-bb7fad22d7b3" containerName="registry-server" Jan 27 17:00:44 crc kubenswrapper[4767]: I0127 17:00:44.521607 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d760573-73ce-45c4-bb6c-bb7fad22d7b3" containerName="registry-server" Jan 27 17:00:44 crc kubenswrapper[4767]: E0127 17:00:44.521638 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d760573-73ce-45c4-bb6c-bb7fad22d7b3" containerName="extract-content" Jan 27 17:00:44 crc kubenswrapper[4767]: I0127 17:00:44.521650 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d760573-73ce-45c4-bb6c-bb7fad22d7b3" containerName="extract-content" Jan 27 17:00:44 crc kubenswrapper[4767]: E0127 17:00:44.521667 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d760573-73ce-45c4-bb6c-bb7fad22d7b3" containerName="extract-utilities" Jan 27 17:00:44 crc kubenswrapper[4767]: I0127 17:00:44.521680 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d760573-73ce-45c4-bb6c-bb7fad22d7b3" containerName="extract-utilities" Jan 27 17:00:44 crc kubenswrapper[4767]: I0127 17:00:44.521910 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d760573-73ce-45c4-bb6c-bb7fad22d7b3" containerName="registry-server" Jan 27 17:00:44 crc kubenswrapper[4767]: I0127 17:00:44.523807 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pmsvw" Jan 27 17:00:44 crc kubenswrapper[4767]: I0127 17:00:44.533399 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pmsvw"] Jan 27 17:00:44 crc kubenswrapper[4767]: I0127 17:00:44.672605 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjgc4\" (UniqueName: \"kubernetes.io/projected/5128dead-03bd-4aa7-a2e5-d734c9353860-kube-api-access-qjgc4\") pod \"certified-operators-pmsvw\" (UID: \"5128dead-03bd-4aa7-a2e5-d734c9353860\") " pod="openshift-marketplace/certified-operators-pmsvw" Jan 27 17:00:44 crc kubenswrapper[4767]: I0127 17:00:44.672880 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5128dead-03bd-4aa7-a2e5-d734c9353860-utilities\") pod \"certified-operators-pmsvw\" (UID: \"5128dead-03bd-4aa7-a2e5-d734c9353860\") " pod="openshift-marketplace/certified-operators-pmsvw" Jan 27 17:00:44 crc kubenswrapper[4767]: I0127 17:00:44.672900 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5128dead-03bd-4aa7-a2e5-d734c9353860-catalog-content\") pod \"certified-operators-pmsvw\" (UID: \"5128dead-03bd-4aa7-a2e5-d734c9353860\") " pod="openshift-marketplace/certified-operators-pmsvw" Jan 27 17:00:44 crc kubenswrapper[4767]: I0127 17:00:44.774500 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjgc4\" (UniqueName: \"kubernetes.io/projected/5128dead-03bd-4aa7-a2e5-d734c9353860-kube-api-access-qjgc4\") pod \"certified-operators-pmsvw\" (UID: \"5128dead-03bd-4aa7-a2e5-d734c9353860\") " pod="openshift-marketplace/certified-operators-pmsvw" Jan 27 17:00:44 crc kubenswrapper[4767]: I0127 17:00:44.774551 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5128dead-03bd-4aa7-a2e5-d734c9353860-utilities\") pod \"certified-operators-pmsvw\" (UID: \"5128dead-03bd-4aa7-a2e5-d734c9353860\") " pod="openshift-marketplace/certified-operators-pmsvw" Jan 27 17:00:44 crc kubenswrapper[4767]: I0127 17:00:44.774569 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5128dead-03bd-4aa7-a2e5-d734c9353860-catalog-content\") pod \"certified-operators-pmsvw\" (UID: \"5128dead-03bd-4aa7-a2e5-d734c9353860\") " pod="openshift-marketplace/certified-operators-pmsvw" Jan 27 17:00:44 crc kubenswrapper[4767]: I0127 17:00:44.775056 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5128dead-03bd-4aa7-a2e5-d734c9353860-utilities\") pod \"certified-operators-pmsvw\" (UID: \"5128dead-03bd-4aa7-a2e5-d734c9353860\") " pod="openshift-marketplace/certified-operators-pmsvw" Jan 27 17:00:44 crc kubenswrapper[4767]: I0127 17:00:44.775082 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5128dead-03bd-4aa7-a2e5-d734c9353860-catalog-content\") pod \"certified-operators-pmsvw\" (UID: \"5128dead-03bd-4aa7-a2e5-d734c9353860\") " pod="openshift-marketplace/certified-operators-pmsvw" Jan 27 17:00:44 crc kubenswrapper[4767]: I0127 17:00:44.798122 4767 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qjgc4\" (UniqueName: \"kubernetes.io/projected/5128dead-03bd-4aa7-a2e5-d734c9353860-kube-api-access-qjgc4\") pod \"certified-operators-pmsvw\" (UID: \"5128dead-03bd-4aa7-a2e5-d734c9353860\") " pod="openshift-marketplace/certified-operators-pmsvw" Jan 27 17:00:44 crc kubenswrapper[4767]: I0127 17:00:44.848847 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pmsvw" Jan 27 17:00:45 crc kubenswrapper[4767]: I0127 17:00:45.168101 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pmsvw"] Jan 27 17:00:45 crc kubenswrapper[4767]: I0127 17:00:45.270019 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pmsvw" event={"ID":"5128dead-03bd-4aa7-a2e5-d734c9353860","Type":"ContainerStarted","Data":"a4e2e77bdaf3b94f56abd3bc964ef6cba0a376766ed793764b21629fb0412122"} Jan 27 17:00:46 crc kubenswrapper[4767]: I0127 17:00:46.290414 4767 generic.go:334] "Generic (PLEG): container finished" podID="5128dead-03bd-4aa7-a2e5-d734c9353860" containerID="f7cd41ff97b057f68e89dc1c195b845a8c1a2329817c45fa35ca540b89341861" exitCode=0 Jan 27 17:00:46 crc kubenswrapper[4767]: I0127 17:00:46.290474 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pmsvw" event={"ID":"5128dead-03bd-4aa7-a2e5-d734c9353860","Type":"ContainerDied","Data":"f7cd41ff97b057f68e89dc1c195b845a8c1a2329817c45fa35ca540b89341861"} Jan 27 17:00:46 crc kubenswrapper[4767]: I0127 17:00:46.293564 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 17:00:48 crc kubenswrapper[4767]: I0127 17:00:48.310099 4767 generic.go:334] "Generic (PLEG): container finished" podID="5128dead-03bd-4aa7-a2e5-d734c9353860" containerID="d201e1e4c9c920d79b7bb0b05b6c0654dff4ba828fc2e1f91f8b1498b8fc9b57" exitCode=0 Jan 27 17:00:48 crc kubenswrapper[4767]: I0127 17:00:48.310247 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pmsvw" event={"ID":"5128dead-03bd-4aa7-a2e5-d734c9353860","Type":"ContainerDied","Data":"d201e1e4c9c920d79b7bb0b05b6c0654dff4ba828fc2e1f91f8b1498b8fc9b57"} Jan 27 17:00:50 crc kubenswrapper[4767]: I0127 17:00:50.354145 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pmsvw" event={"ID":"5128dead-03bd-4aa7-a2e5-d734c9353860","Type":"ContainerStarted","Data":"945e401e67c2cbf82d533911d08d5d10f24fa8b06591b1bd5c342451c828601c"} Jan 27 17:00:50 crc kubenswrapper[4767]: I0127 17:00:50.364644 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pmsvw" podStartSLOduration=3.519128378 podStartE2EDuration="6.36461747s" podCreationTimestamp="2026-01-27 17:00:44 +0000 UTC" firstStartedPulling="2026-01-27 17:00:46.293141346 +0000 UTC m=+4268.682158909" lastFinishedPulling="2026-01-27 17:00:49.138630458 +0000 UTC m=+4271.527648001" observedRunningTime="2026-01-27 17:00:50.356752446 +0000 UTC m=+4272.745769979" watchObservedRunningTime="2026-01-27 17:00:50.36461747 +0000 UTC m=+4272.753635013" Jan 27 17:00:54 crc kubenswrapper[4767]: I0127 17:00:54.849723 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pmsvw" Jan 27 17:00:54 crc kubenswrapper[4767]: I0127 17:00:54.851163 4767 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pmsvw" Jan 27 17:00:54 crc kubenswrapper[4767]: I0127 17:00:54.936321 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pmsvw" Jan 27 17:00:55 crc kubenswrapper[4767]: I0127 17:00:55.438591 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pmsvw" Jan 27 17:00:55 crc kubenswrapper[4767]: I0127 17:00:55.505738 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pmsvw"] Jan 27 17:00:57 crc kubenswrapper[4767]: I0127 17:00:57.385117 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pmsvw" podUID="5128dead-03bd-4aa7-a2e5-d734c9353860" containerName="registry-server" containerID="cri-o://945e401e67c2cbf82d533911d08d5d10f24fa8b06591b1bd5c342451c828601c" gracePeriod=2 Jan 27 17:00:58 crc kubenswrapper[4767]: I0127 17:00:58.279259 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pmsvw" Jan 27 17:00:58 crc kubenswrapper[4767]: I0127 17:00:58.378769 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5128dead-03bd-4aa7-a2e5-d734c9353860-utilities\") pod \"5128dead-03bd-4aa7-a2e5-d734c9353860\" (UID: \"5128dead-03bd-4aa7-a2e5-d734c9353860\") " Jan 27 17:00:58 crc kubenswrapper[4767]: I0127 17:00:58.378842 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjgc4\" (UniqueName: \"kubernetes.io/projected/5128dead-03bd-4aa7-a2e5-d734c9353860-kube-api-access-qjgc4\") pod \"5128dead-03bd-4aa7-a2e5-d734c9353860\" (UID: \"5128dead-03bd-4aa7-a2e5-d734c9353860\") " Jan 27 17:00:58 crc kubenswrapper[4767]: I0127 17:00:58.378911 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5128dead-03bd-4aa7-a2e5-d734c9353860-catalog-content\") pod \"5128dead-03bd-4aa7-a2e5-d734c9353860\" (UID: \"5128dead-03bd-4aa7-a2e5-d734c9353860\") " Jan 27 17:00:58 crc kubenswrapper[4767]: I0127 17:00:58.379593 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5128dead-03bd-4aa7-a2e5-d734c9353860-utilities" (OuterVolumeSpecName: "utilities") pod "5128dead-03bd-4aa7-a2e5-d734c9353860" (UID: "5128dead-03bd-4aa7-a2e5-d734c9353860"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:00:58 crc kubenswrapper[4767]: I0127 17:00:58.383806 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5128dead-03bd-4aa7-a2e5-d734c9353860-kube-api-access-qjgc4" (OuterVolumeSpecName: "kube-api-access-qjgc4") pod "5128dead-03bd-4aa7-a2e5-d734c9353860" (UID: "5128dead-03bd-4aa7-a2e5-d734c9353860"). InnerVolumeSpecName "kube-api-access-qjgc4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:00:58 crc kubenswrapper[4767]: I0127 17:00:58.403397 4767 generic.go:334] "Generic (PLEG): container finished" podID="5128dead-03bd-4aa7-a2e5-d734c9353860" containerID="945e401e67c2cbf82d533911d08d5d10f24fa8b06591b1bd5c342451c828601c" exitCode=0 Jan 27 17:00:58 crc kubenswrapper[4767]: I0127 17:00:58.403451 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pmsvw" event={"ID":"5128dead-03bd-4aa7-a2e5-d734c9353860","Type":"ContainerDied","Data":"945e401e67c2cbf82d533911d08d5d10f24fa8b06591b1bd5c342451c828601c"} Jan 27 17:00:58 crc kubenswrapper[4767]: I0127 17:00:58.403485 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pmsvw" event={"ID":"5128dead-03bd-4aa7-a2e5-d734c9353860","Type":"ContainerDied","Data":"a4e2e77bdaf3b94f56abd3bc964ef6cba0a376766ed793764b21629fb0412122"} Jan 27 17:00:58 crc kubenswrapper[4767]: I0127 17:00:58.403506 4767 scope.go:117] "RemoveContainer" containerID="945e401e67c2cbf82d533911d08d5d10f24fa8b06591b1bd5c342451c828601c" Jan 27 17:00:58 crc kubenswrapper[4767]: I0127 17:00:58.403479 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pmsvw" Jan 27 17:00:58 crc kubenswrapper[4767]: I0127 17:00:58.423223 4767 scope.go:117] "RemoveContainer" containerID="d201e1e4c9c920d79b7bb0b05b6c0654dff4ba828fc2e1f91f8b1498b8fc9b57" Jan 27 17:00:58 crc kubenswrapper[4767]: I0127 17:00:58.442485 4767 scope.go:117] "RemoveContainer" containerID="f7cd41ff97b057f68e89dc1c195b845a8c1a2329817c45fa35ca540b89341861" Jan 27 17:00:58 crc kubenswrapper[4767]: I0127 17:00:58.475799 4767 scope.go:117] "RemoveContainer" containerID="945e401e67c2cbf82d533911d08d5d10f24fa8b06591b1bd5c342451c828601c" Jan 27 17:00:58 crc kubenswrapper[4767]: E0127 17:00:58.478105 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"945e401e67c2cbf82d533911d08d5d10f24fa8b06591b1bd5c342451c828601c\": container with ID starting with 945e401e67c2cbf82d533911d08d5d10f24fa8b06591b1bd5c342451c828601c not found: ID does not exist" containerID="945e401e67c2cbf82d533911d08d5d10f24fa8b06591b1bd5c342451c828601c" Jan 27 17:00:58 crc kubenswrapper[4767]: I0127 17:00:58.478185 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"945e401e67c2cbf82d533911d08d5d10f24fa8b06591b1bd5c342451c828601c"} err="failed to get container status \"945e401e67c2cbf82d533911d08d5d10f24fa8b06591b1bd5c342451c828601c\": rpc error: code = NotFound desc = could not find container \"945e401e67c2cbf82d533911d08d5d10f24fa8b06591b1bd5c342451c828601c\": container with ID starting with 945e401e67c2cbf82d533911d08d5d10f24fa8b06591b1bd5c342451c828601c not found: ID does not exist" Jan 27 17:00:58 crc kubenswrapper[4767]: I0127 17:00:58.478238 4767 scope.go:117] "RemoveContainer" containerID="d201e1e4c9c920d79b7bb0b05b6c0654dff4ba828fc2e1f91f8b1498b8fc9b57" Jan 27 17:00:58 crc kubenswrapper[4767]: E0127 17:00:58.478674 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d201e1e4c9c920d79b7bb0b05b6c0654dff4ba828fc2e1f91f8b1498b8fc9b57\": container with ID starting with d201e1e4c9c920d79b7bb0b05b6c0654dff4ba828fc2e1f91f8b1498b8fc9b57 not found: ID does not exist" 
containerID="d201e1e4c9c920d79b7bb0b05b6c0654dff4ba828fc2e1f91f8b1498b8fc9b57" Jan 27 17:00:58 crc kubenswrapper[4767]: I0127 17:00:58.478749 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d201e1e4c9c920d79b7bb0b05b6c0654dff4ba828fc2e1f91f8b1498b8fc9b57"} err="failed to get container status \"d201e1e4c9c920d79b7bb0b05b6c0654dff4ba828fc2e1f91f8b1498b8fc9b57\": rpc error: code = NotFound desc = could not find container \"d201e1e4c9c920d79b7bb0b05b6c0654dff4ba828fc2e1f91f8b1498b8fc9b57\": container with ID starting with d201e1e4c9c920d79b7bb0b05b6c0654dff4ba828fc2e1f91f8b1498b8fc9b57 not found: ID does not exist" Jan 27 17:00:58 crc kubenswrapper[4767]: I0127 17:00:58.478792 4767 scope.go:117] "RemoveContainer" containerID="f7cd41ff97b057f68e89dc1c195b845a8c1a2329817c45fa35ca540b89341861" Jan 27 17:00:58 crc kubenswrapper[4767]: E0127 17:00:58.479134 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7cd41ff97b057f68e89dc1c195b845a8c1a2329817c45fa35ca540b89341861\": container with ID starting with f7cd41ff97b057f68e89dc1c195b845a8c1a2329817c45fa35ca540b89341861 not found: ID does not exist" containerID="f7cd41ff97b057f68e89dc1c195b845a8c1a2329817c45fa35ca540b89341861" Jan 27 17:00:58 crc kubenswrapper[4767]: I0127 17:00:58.479152 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7cd41ff97b057f68e89dc1c195b845a8c1a2329817c45fa35ca540b89341861"} err="failed to get container status \"f7cd41ff97b057f68e89dc1c195b845a8c1a2329817c45fa35ca540b89341861\": rpc error: code = NotFound desc = could not find container \"f7cd41ff97b057f68e89dc1c195b845a8c1a2329817c45fa35ca540b89341861\": container with ID starting with f7cd41ff97b057f68e89dc1c195b845a8c1a2329817c45fa35ca540b89341861 not found: ID does not exist" Jan 27 17:00:58 crc kubenswrapper[4767]: I0127 17:00:58.481488 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5128dead-03bd-4aa7-a2e5-d734c9353860-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:58 crc kubenswrapper[4767]: I0127 17:00:58.481520 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjgc4\" (UniqueName: \"kubernetes.io/projected/5128dead-03bd-4aa7-a2e5-d734c9353860-kube-api-access-qjgc4\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:58 crc kubenswrapper[4767]: I0127 17:00:58.776145 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5128dead-03bd-4aa7-a2e5-d734c9353860-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5128dead-03bd-4aa7-a2e5-d734c9353860" (UID: "5128dead-03bd-4aa7-a2e5-d734c9353860"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:00:58 crc kubenswrapper[4767]: I0127 17:00:58.786400 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5128dead-03bd-4aa7-a2e5-d734c9353860-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:59 crc kubenswrapper[4767]: I0127 17:00:59.035190 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pmsvw"] Jan 27 17:00:59 crc kubenswrapper[4767]: I0127 17:00:59.041407 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pmsvw"] Jan 27 17:01:00 crc kubenswrapper[4767]: I0127 17:01:00.340474 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5128dead-03bd-4aa7-a2e5-d734c9353860" path="/var/lib/kubelet/pods/5128dead-03bd-4aa7-a2e5-d734c9353860/volumes" Jan 27 17:01:24 crc kubenswrapper[4767]: I0127 17:01:24.857960 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:01:24 crc kubenswrapper[4767]: I0127 17:01:24.858865 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:01:54 crc kubenswrapper[4767]: I0127 17:01:54.857920 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:01:54 crc kubenswrapper[4767]: I0127 17:01:54.858683 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:02:24 crc kubenswrapper[4767]: I0127 17:02:24.858005 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:02:24 crc kubenswrapper[4767]: I0127 17:02:24.858595 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:02:24 crc kubenswrapper[4767]: I0127 17:02:24.858660 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" Jan 27 17:02:24 crc kubenswrapper[4767]: I0127 17:02:24.859559 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"67cf98cb7edba4ebdfb2b59dde0236a61621380d9f946b679e711de60d99f892"} pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 17:02:24 crc kubenswrapper[4767]: I0127 17:02:24.859659 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" containerID="cri-o://67cf98cb7edba4ebdfb2b59dde0236a61621380d9f946b679e711de60d99f892" gracePeriod=600 Jan 27 17:02:25 crc kubenswrapper[4767]: I0127 17:02:25.175081 4767 generic.go:334] "Generic (PLEG): container finished" podID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerID="67cf98cb7edba4ebdfb2b59dde0236a61621380d9f946b679e711de60d99f892" exitCode=0 Jan 27 17:02:25 crc kubenswrapper[4767]: I0127 17:02:25.175147 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerDied","Data":"67cf98cb7edba4ebdfb2b59dde0236a61621380d9f946b679e711de60d99f892"} Jan 27 17:02:25 crc kubenswrapper[4767]: I0127 17:02:25.175645 4767 scope.go:117] "RemoveContainer" containerID="901e47f2d31b8c5b5de8f2b0122248fa480d55b2fb49da2cdd832daa6303acea" Jan 27 17:02:26 crc kubenswrapper[4767]: I0127 17:02:26.187133 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerStarted","Data":"0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758"} Jan 27 17:03:45 crc kubenswrapper[4767]: I0127 17:03:45.565084 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vkbsx"] Jan 27 17:03:45 crc kubenswrapper[4767]: E0127 17:03:45.566023 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5128dead-03bd-4aa7-a2e5-d734c9353860" containerName="registry-server" Jan 27 17:03:45 crc kubenswrapper[4767]: I0127 17:03:45.566038 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5128dead-03bd-4aa7-a2e5-d734c9353860" containerName="registry-server" Jan 27 17:03:45 crc kubenswrapper[4767]: E0127 17:03:45.566057 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5128dead-03bd-4aa7-a2e5-d734c9353860" containerName="extract-utilities" Jan 27 17:03:45 crc kubenswrapper[4767]: I0127 17:03:45.566066 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5128dead-03bd-4aa7-a2e5-d734c9353860" containerName="extract-utilities" Jan 27 17:03:45 crc kubenswrapper[4767]: E0127 17:03:45.566081 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5128dead-03bd-4aa7-a2e5-d734c9353860" containerName="extract-content" Jan 27 17:03:45 crc kubenswrapper[4767]: I0127 17:03:45.566088 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="5128dead-03bd-4aa7-a2e5-d734c9353860" containerName="extract-content" Jan 27 17:03:45 crc kubenswrapper[4767]: I0127 17:03:45.566281 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="5128dead-03bd-4aa7-a2e5-d734c9353860" containerName="registry-server" Jan 27 17:03:45 crc kubenswrapper[4767]: I0127 17:03:45.567481 4767 util.go:30] "No sandbox for pod can be found. 
Jan 27 17:03:45 crc kubenswrapper[4767]: I0127 17:03:45.581600 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkbsx"]
Jan 27 17:03:45 crc kubenswrapper[4767]: I0127 17:03:45.761502 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln6ph\" (UniqueName: \"kubernetes.io/projected/c5705ff5-aafc-43bc-99a8-251e9e02caeb-kube-api-access-ln6ph\") pod \"redhat-marketplace-vkbsx\" (UID: \"c5705ff5-aafc-43bc-99a8-251e9e02caeb\") " pod="openshift-marketplace/redhat-marketplace-vkbsx"
Jan 27 17:03:45 crc kubenswrapper[4767]: I0127 17:03:45.762116 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5705ff5-aafc-43bc-99a8-251e9e02caeb-catalog-content\") pod \"redhat-marketplace-vkbsx\" (UID: \"c5705ff5-aafc-43bc-99a8-251e9e02caeb\") " pod="openshift-marketplace/redhat-marketplace-vkbsx"
Jan 27 17:03:45 crc kubenswrapper[4767]: I0127 17:03:45.762139 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5705ff5-aafc-43bc-99a8-251e9e02caeb-utilities\") pod \"redhat-marketplace-vkbsx\" (UID: \"c5705ff5-aafc-43bc-99a8-251e9e02caeb\") " pod="openshift-marketplace/redhat-marketplace-vkbsx"
Jan 27 17:03:45 crc kubenswrapper[4767]: I0127 17:03:45.865072 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln6ph\" (UniqueName: \"kubernetes.io/projected/c5705ff5-aafc-43bc-99a8-251e9e02caeb-kube-api-access-ln6ph\") pod \"redhat-marketplace-vkbsx\" (UID: \"c5705ff5-aafc-43bc-99a8-251e9e02caeb\") " pod="openshift-marketplace/redhat-marketplace-vkbsx"
Jan 27 17:03:45 crc kubenswrapper[4767]: I0127 17:03:45.865159 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5705ff5-aafc-43bc-99a8-251e9e02caeb-catalog-content\") pod \"redhat-marketplace-vkbsx\" (UID: \"c5705ff5-aafc-43bc-99a8-251e9e02caeb\") " pod="openshift-marketplace/redhat-marketplace-vkbsx"
Jan 27 17:03:45 crc kubenswrapper[4767]: I0127 17:03:45.865185 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5705ff5-aafc-43bc-99a8-251e9e02caeb-utilities\") pod \"redhat-marketplace-vkbsx\" (UID: \"c5705ff5-aafc-43bc-99a8-251e9e02caeb\") " pod="openshift-marketplace/redhat-marketplace-vkbsx"
Jan 27 17:03:45 crc kubenswrapper[4767]: I0127 17:03:45.867328 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5705ff5-aafc-43bc-99a8-251e9e02caeb-utilities\") pod \"redhat-marketplace-vkbsx\" (UID: \"c5705ff5-aafc-43bc-99a8-251e9e02caeb\") " pod="openshift-marketplace/redhat-marketplace-vkbsx"
Jan 27 17:03:45 crc kubenswrapper[4767]: I0127 17:03:45.868001 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5705ff5-aafc-43bc-99a8-251e9e02caeb-catalog-content\") pod \"redhat-marketplace-vkbsx\" (UID: \"c5705ff5-aafc-43bc-99a8-251e9e02caeb\") " pod="openshift-marketplace/redhat-marketplace-vkbsx"
Jan 27 17:03:45 crc kubenswrapper[4767]: I0127 17:03:45.894729 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln6ph\" (UniqueName: \"kubernetes.io/projected/c5705ff5-aafc-43bc-99a8-251e9e02caeb-kube-api-access-ln6ph\") pod \"redhat-marketplace-vkbsx\" (UID: \"c5705ff5-aafc-43bc-99a8-251e9e02caeb\") " pod="openshift-marketplace/redhat-marketplace-vkbsx"
Jan 27 17:03:46 crc kubenswrapper[4767]: I0127 17:03:46.191662 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkbsx"
Jan 27 17:03:46 crc kubenswrapper[4767]: I0127 17:03:46.623810 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkbsx"]
Jan 27 17:03:47 crc kubenswrapper[4767]: I0127 17:03:47.003071 4767 generic.go:334] "Generic (PLEG): container finished" podID="c5705ff5-aafc-43bc-99a8-251e9e02caeb" containerID="a7513cd0f5e582c1111c4625a29407d93d225abeebfdd3917d2b94f2426a79b5" exitCode=0
Jan 27 17:03:47 crc kubenswrapper[4767]: I0127 17:03:47.003171 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkbsx" event={"ID":"c5705ff5-aafc-43bc-99a8-251e9e02caeb","Type":"ContainerDied","Data":"a7513cd0f5e582c1111c4625a29407d93d225abeebfdd3917d2b94f2426a79b5"}
Jan 27 17:03:47 crc kubenswrapper[4767]: I0127 17:03:47.003461 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkbsx" event={"ID":"c5705ff5-aafc-43bc-99a8-251e9e02caeb","Type":"ContainerStarted","Data":"576aa0779ee4b7c76a913b3dc4859ae22888e474d714a386bb928e9046fd6881"}
Jan 27 17:03:48 crc kubenswrapper[4767]: I0127 17:03:48.014334 4767 generic.go:334] "Generic (PLEG): container finished" podID="c5705ff5-aafc-43bc-99a8-251e9e02caeb" containerID="1dba5bd9d11feaf1a59c257c1a744b8ef68ad522fa20ff21c480d3b0866e041d" exitCode=0
Jan 27 17:03:48 crc kubenswrapper[4767]: I0127 17:03:48.014380 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkbsx" event={"ID":"c5705ff5-aafc-43bc-99a8-251e9e02caeb","Type":"ContainerDied","Data":"1dba5bd9d11feaf1a59c257c1a744b8ef68ad522fa20ff21c480d3b0866e041d"}
Jan 27 17:03:50 crc kubenswrapper[4767]: I0127 17:03:50.033276 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkbsx" event={"ID":"c5705ff5-aafc-43bc-99a8-251e9e02caeb","Type":"ContainerStarted","Data":"6b47425d57d1fe9841d4388d69726fdd662baa104f5b326f8c7e68b5e59ea4e3"}
Jan 27 17:03:50 crc kubenswrapper[4767]: I0127 17:03:50.054295 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vkbsx" podStartSLOduration=3.430321567 podStartE2EDuration="5.054274555s" podCreationTimestamp="2026-01-27 17:03:45 +0000 UTC" firstStartedPulling="2026-01-27 17:03:47.004560201 +0000 UTC m=+4449.393577724" lastFinishedPulling="2026-01-27 17:03:48.628513179 +0000 UTC m=+4451.017530712" observedRunningTime="2026-01-27 17:03:50.049607392 +0000 UTC m=+4452.438624915" watchObservedRunningTime="2026-01-27 17:03:50.054274555 +0000 UTC m=+4452.443292078"
Jan 27 17:03:56 crc kubenswrapper[4767]: I0127 17:03:56.192131 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vkbsx"
Jan 27 17:03:56 crc kubenswrapper[4767]: I0127 17:03:56.192726 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vkbsx"
Jan 27 17:03:56 crc kubenswrapper[4767]: I0127 17:03:56.249099 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vkbsx"
Jan 27 17:03:57 crc kubenswrapper[4767]: I0127 17:03:57.165512 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vkbsx"
Jan 27 17:03:57 crc kubenswrapper[4767]: I0127 17:03:57.227013 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkbsx"]
Jan 27 17:03:59 crc kubenswrapper[4767]: I0127 17:03:59.110672 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vkbsx" podUID="c5705ff5-aafc-43bc-99a8-251e9e02caeb" containerName="registry-server" containerID="cri-o://6b47425d57d1fe9841d4388d69726fdd662baa104f5b326f8c7e68b5e59ea4e3" gracePeriod=2
Jan 27 17:04:00 crc kubenswrapper[4767]: I0127 17:04:00.101277 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkbsx"
Jan 27 17:04:00 crc kubenswrapper[4767]: I0127 17:04:00.133677 4767 generic.go:334] "Generic (PLEG): container finished" podID="c5705ff5-aafc-43bc-99a8-251e9e02caeb" containerID="6b47425d57d1fe9841d4388d69726fdd662baa104f5b326f8c7e68b5e59ea4e3" exitCode=0
Jan 27 17:04:00 crc kubenswrapper[4767]: I0127 17:04:00.133764 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkbsx" event={"ID":"c5705ff5-aafc-43bc-99a8-251e9e02caeb","Type":"ContainerDied","Data":"6b47425d57d1fe9841d4388d69726fdd662baa104f5b326f8c7e68b5e59ea4e3"}
Jan 27 17:04:00 crc kubenswrapper[4767]: I0127 17:04:00.133823 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkbsx" event={"ID":"c5705ff5-aafc-43bc-99a8-251e9e02caeb","Type":"ContainerDied","Data":"576aa0779ee4b7c76a913b3dc4859ae22888e474d714a386bb928e9046fd6881"}
Jan 27 17:04:00 crc kubenswrapper[4767]: I0127 17:04:00.133863 4767 scope.go:117] "RemoveContainer" containerID="6b47425d57d1fe9841d4388d69726fdd662baa104f5b326f8c7e68b5e59ea4e3"
Jan 27 17:04:00 crc kubenswrapper[4767]: I0127 17:04:00.134382 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkbsx"
Jan 27 17:04:00 crc kubenswrapper[4767]: I0127 17:04:00.167831 4767 scope.go:117] "RemoveContainer" containerID="1dba5bd9d11feaf1a59c257c1a744b8ef68ad522fa20ff21c480d3b0866e041d"
Jan 27 17:04:00 crc kubenswrapper[4767]: I0127 17:04:00.191672 4767 scope.go:117] "RemoveContainer" containerID="a7513cd0f5e582c1111c4625a29407d93d225abeebfdd3917d2b94f2426a79b5"
Jan 27 17:04:00 crc kubenswrapper[4767]: I0127 17:04:00.212660 4767 scope.go:117] "RemoveContainer" containerID="6b47425d57d1fe9841d4388d69726fdd662baa104f5b326f8c7e68b5e59ea4e3"
Jan 27 17:04:00 crc kubenswrapper[4767]: E0127 17:04:00.213692 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b47425d57d1fe9841d4388d69726fdd662baa104f5b326f8c7e68b5e59ea4e3\": container with ID starting with 6b47425d57d1fe9841d4388d69726fdd662baa104f5b326f8c7e68b5e59ea4e3 not found: ID does not exist" containerID="6b47425d57d1fe9841d4388d69726fdd662baa104f5b326f8c7e68b5e59ea4e3"
Jan 27 17:04:00 crc kubenswrapper[4767]: I0127 17:04:00.213779 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b47425d57d1fe9841d4388d69726fdd662baa104f5b326f8c7e68b5e59ea4e3"} err="failed to get container status \"6b47425d57d1fe9841d4388d69726fdd662baa104f5b326f8c7e68b5e59ea4e3\": rpc error: code = NotFound desc = could not find container \"6b47425d57d1fe9841d4388d69726fdd662baa104f5b326f8c7e68b5e59ea4e3\": container with ID starting with 6b47425d57d1fe9841d4388d69726fdd662baa104f5b326f8c7e68b5e59ea4e3 not found: ID does not exist"
Jan 27 17:04:00 crc kubenswrapper[4767]: I0127 17:04:00.213809 4767 scope.go:117] "RemoveContainer" containerID="1dba5bd9d11feaf1a59c257c1a744b8ef68ad522fa20ff21c480d3b0866e041d"
Jan 27 17:04:00 crc kubenswrapper[4767]: E0127 17:04:00.214253 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dba5bd9d11feaf1a59c257c1a744b8ef68ad522fa20ff21c480d3b0866e041d\": container with ID starting with 1dba5bd9d11feaf1a59c257c1a744b8ef68ad522fa20ff21c480d3b0866e041d not found: ID does not exist" containerID="1dba5bd9d11feaf1a59c257c1a744b8ef68ad522fa20ff21c480d3b0866e041d"
Jan 27 17:04:00 crc kubenswrapper[4767]: I0127 17:04:00.214322 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dba5bd9d11feaf1a59c257c1a744b8ef68ad522fa20ff21c480d3b0866e041d"} err="failed to get container status \"1dba5bd9d11feaf1a59c257c1a744b8ef68ad522fa20ff21c480d3b0866e041d\": rpc error: code = NotFound desc = could not find container \"1dba5bd9d11feaf1a59c257c1a744b8ef68ad522fa20ff21c480d3b0866e041d\": container with ID starting with 1dba5bd9d11feaf1a59c257c1a744b8ef68ad522fa20ff21c480d3b0866e041d not found: ID does not exist"
Jan 27 17:04:00 crc kubenswrapper[4767]: I0127 17:04:00.214385 4767 scope.go:117] "RemoveContainer" containerID="a7513cd0f5e582c1111c4625a29407d93d225abeebfdd3917d2b94f2426a79b5"
Jan 27 17:04:00 crc kubenswrapper[4767]: E0127 17:04:00.214894 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7513cd0f5e582c1111c4625a29407d93d225abeebfdd3917d2b94f2426a79b5\": container with ID starting with a7513cd0f5e582c1111c4625a29407d93d225abeebfdd3917d2b94f2426a79b5 not found: ID does not exist" containerID="a7513cd0f5e582c1111c4625a29407d93d225abeebfdd3917d2b94f2426a79b5"
Jan 27 17:04:00 crc kubenswrapper[4767]: I0127 17:04:00.214926 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7513cd0f5e582c1111c4625a29407d93d225abeebfdd3917d2b94f2426a79b5"} err="failed to get container status \"a7513cd0f5e582c1111c4625a29407d93d225abeebfdd3917d2b94f2426a79b5\": rpc error: code = NotFound desc = could not find container \"a7513cd0f5e582c1111c4625a29407d93d225abeebfdd3917d2b94f2426a79b5\": container with ID starting with a7513cd0f5e582c1111c4625a29407d93d225abeebfdd3917d2b94f2426a79b5 not found: ID does not exist"
Jan 27 17:04:00 crc kubenswrapper[4767]: I0127 17:04:00.290455 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5705ff5-aafc-43bc-99a8-251e9e02caeb-utilities\") pod \"c5705ff5-aafc-43bc-99a8-251e9e02caeb\" (UID: \"c5705ff5-aafc-43bc-99a8-251e9e02caeb\") "
Jan 27 17:04:00 crc kubenswrapper[4767]: I0127 17:04:00.290613 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5705ff5-aafc-43bc-99a8-251e9e02caeb-catalog-content\") pod \"c5705ff5-aafc-43bc-99a8-251e9e02caeb\" (UID: \"c5705ff5-aafc-43bc-99a8-251e9e02caeb\") "
Jan 27 17:04:00 crc kubenswrapper[4767]: I0127 17:04:00.290669 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln6ph\" (UniqueName: \"kubernetes.io/projected/c5705ff5-aafc-43bc-99a8-251e9e02caeb-kube-api-access-ln6ph\") pod \"c5705ff5-aafc-43bc-99a8-251e9e02caeb\" (UID: \"c5705ff5-aafc-43bc-99a8-251e9e02caeb\") "
Jan 27 17:04:00 crc kubenswrapper[4767]: I0127 17:04:00.291422 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5705ff5-aafc-43bc-99a8-251e9e02caeb-utilities" (OuterVolumeSpecName: "utilities") pod "c5705ff5-aafc-43bc-99a8-251e9e02caeb" (UID: "c5705ff5-aafc-43bc-99a8-251e9e02caeb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:04:00 crc kubenswrapper[4767]: I0127 17:04:00.300027 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5705ff5-aafc-43bc-99a8-251e9e02caeb-kube-api-access-ln6ph" (OuterVolumeSpecName: "kube-api-access-ln6ph") pod "c5705ff5-aafc-43bc-99a8-251e9e02caeb" (UID: "c5705ff5-aafc-43bc-99a8-251e9e02caeb"). InnerVolumeSpecName "kube-api-access-ln6ph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:04:00 crc kubenswrapper[4767]: I0127 17:04:00.317249 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5705ff5-aafc-43bc-99a8-251e9e02caeb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5705ff5-aafc-43bc-99a8-251e9e02caeb" (UID: "c5705ff5-aafc-43bc-99a8-251e9e02caeb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:04:00 crc kubenswrapper[4767]: I0127 17:04:00.392936 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5705ff5-aafc-43bc-99a8-251e9e02caeb-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 17:04:00 crc kubenswrapper[4767]: I0127 17:04:00.392988 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5705ff5-aafc-43bc-99a8-251e9e02caeb-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 17:04:00 crc kubenswrapper[4767]: I0127 17:04:00.393094 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln6ph\" (UniqueName: \"kubernetes.io/projected/c5705ff5-aafc-43bc-99a8-251e9e02caeb-kube-api-access-ln6ph\") on node \"crc\" DevicePath \"\""
Jan 27 17:04:00 crc kubenswrapper[4767]: I0127 17:04:00.463150 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkbsx"]
Jan 27 17:04:00 crc kubenswrapper[4767]: I0127 17:04:00.473091 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkbsx"]
Jan 27 17:04:02 crc kubenswrapper[4767]: I0127 17:04:02.362122 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5705ff5-aafc-43bc-99a8-251e9e02caeb" path="/var/lib/kubelet/pods/c5705ff5-aafc-43bc-99a8-251e9e02caeb/volumes"
Jan 27 17:04:54 crc kubenswrapper[4767]: I0127 17:04:54.857482 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 17:04:54 crc kubenswrapper[4767]: I0127 17:04:54.858053 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 17:05:24 crc kubenswrapper[4767]: I0127 17:05:24.858002 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 17:05:24 crc kubenswrapper[4767]: I0127 17:05:24.858576 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 17:05:40 crc kubenswrapper[4767]: I0127 17:05:40.749390 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ppr69"]
Jan 27 17:05:40 crc kubenswrapper[4767]: E0127 17:05:40.750417 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5705ff5-aafc-43bc-99a8-251e9e02caeb" containerName="extract-content"
Jan 27 17:05:40 crc kubenswrapper[4767]: I0127 17:05:40.750441 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5705ff5-aafc-43bc-99a8-251e9e02caeb" containerName="extract-content"
Jan 27 17:05:40 crc kubenswrapper[4767]: E0127 17:05:40.750457 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5705ff5-aafc-43bc-99a8-251e9e02caeb" containerName="extract-utilities"
Jan 27 17:05:40 crc kubenswrapper[4767]: I0127 17:05:40.750468 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5705ff5-aafc-43bc-99a8-251e9e02caeb" containerName="extract-utilities"
Jan 27 17:05:40 crc kubenswrapper[4767]: E0127 17:05:40.750507 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5705ff5-aafc-43bc-99a8-251e9e02caeb" containerName="registry-server"
Jan 27 17:05:40 crc kubenswrapper[4767]: I0127 17:05:40.750519 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5705ff5-aafc-43bc-99a8-251e9e02caeb" containerName="registry-server"
Jan 27 17:05:40 crc kubenswrapper[4767]: I0127 17:05:40.750739 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5705ff5-aafc-43bc-99a8-251e9e02caeb" containerName="registry-server"
Jan 27 17:05:40 crc kubenswrapper[4767]: I0127 17:05:40.752319 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ppr69"
Jan 27 17:05:40 crc kubenswrapper[4767]: I0127 17:05:40.777618 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ppr69"]
Jan 27 17:05:40 crc kubenswrapper[4767]: I0127 17:05:40.779118 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/834af50e-2c1b-4390-84c8-39d3f69f2693-utilities\") pod \"redhat-operators-ppr69\" (UID: \"834af50e-2c1b-4390-84c8-39d3f69f2693\") " pod="openshift-marketplace/redhat-operators-ppr69"
Jan 27 17:05:40 crc kubenswrapper[4767]: I0127 17:05:40.779669 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmmdw\" (UniqueName: \"kubernetes.io/projected/834af50e-2c1b-4390-84c8-39d3f69f2693-kube-api-access-rmmdw\") pod \"redhat-operators-ppr69\" (UID: \"834af50e-2c1b-4390-84c8-39d3f69f2693\") " pod="openshift-marketplace/redhat-operators-ppr69"
Jan 27 17:05:40 crc kubenswrapper[4767]: I0127 17:05:40.780019 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/834af50e-2c1b-4390-84c8-39d3f69f2693-catalog-content\") pod \"redhat-operators-ppr69\" (UID: \"834af50e-2c1b-4390-84c8-39d3f69f2693\") " pod="openshift-marketplace/redhat-operators-ppr69"
Jan 27 17:05:40 crc kubenswrapper[4767]: I0127 17:05:40.881368 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/834af50e-2c1b-4390-84c8-39d3f69f2693-catalog-content\") pod \"redhat-operators-ppr69\" (UID: \"834af50e-2c1b-4390-84c8-39d3f69f2693\") " pod="openshift-marketplace/redhat-operators-ppr69"
Jan 27 17:05:40 crc kubenswrapper[4767]: I0127 17:05:40.881478 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/834af50e-2c1b-4390-84c8-39d3f69f2693-utilities\") pod \"redhat-operators-ppr69\" (UID: \"834af50e-2c1b-4390-84c8-39d3f69f2693\") " pod="openshift-marketplace/redhat-operators-ppr69"
Jan 27 17:05:40 crc kubenswrapper[4767]: I0127 17:05:40.881506 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmmdw\" (UniqueName:
\"kubernetes.io/projected/834af50e-2c1b-4390-84c8-39d3f69f2693-kube-api-access-rmmdw\") pod \"redhat-operators-ppr69\" (UID: \"834af50e-2c1b-4390-84c8-39d3f69f2693\") " pod="openshift-marketplace/redhat-operators-ppr69" Jan 27 17:05:40 crc kubenswrapper[4767]: I0127 17:05:40.881855 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/834af50e-2c1b-4390-84c8-39d3f69f2693-catalog-content\") pod \"redhat-operators-ppr69\" (UID: \"834af50e-2c1b-4390-84c8-39d3f69f2693\") " pod="openshift-marketplace/redhat-operators-ppr69" Jan 27 17:05:40 crc kubenswrapper[4767]: I0127 17:05:40.881963 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/834af50e-2c1b-4390-84c8-39d3f69f2693-utilities\") pod \"redhat-operators-ppr69\" (UID: \"834af50e-2c1b-4390-84c8-39d3f69f2693\") " pod="openshift-marketplace/redhat-operators-ppr69" Jan 27 17:05:40 crc kubenswrapper[4767]: I0127 17:05:40.904292 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmmdw\" (UniqueName: \"kubernetes.io/projected/834af50e-2c1b-4390-84c8-39d3f69f2693-kube-api-access-rmmdw\") pod \"redhat-operators-ppr69\" (UID: \"834af50e-2c1b-4390-84c8-39d3f69f2693\") " pod="openshift-marketplace/redhat-operators-ppr69" Jan 27 17:05:41 crc kubenswrapper[4767]: I0127 17:05:41.088984 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ppr69" Jan 27 17:05:41 crc kubenswrapper[4767]: I0127 17:05:41.512353 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ppr69"] Jan 27 17:05:42 crc kubenswrapper[4767]: I0127 17:05:42.022170 4767 generic.go:334] "Generic (PLEG): container finished" podID="834af50e-2c1b-4390-84c8-39d3f69f2693" containerID="40f8494ace2d3761cc103a6a7e655edf9fac0cf8b344d623526106776e587e6d" exitCode=0 Jan 27 17:05:42 crc kubenswrapper[4767]: I0127 17:05:42.022240 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppr69" event={"ID":"834af50e-2c1b-4390-84c8-39d3f69f2693","Type":"ContainerDied","Data":"40f8494ace2d3761cc103a6a7e655edf9fac0cf8b344d623526106776e587e6d"} Jan 27 17:05:42 crc kubenswrapper[4767]: I0127 17:05:42.022286 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppr69" event={"ID":"834af50e-2c1b-4390-84c8-39d3f69f2693","Type":"ContainerStarted","Data":"aa117ff4de59c6bcb7ef48b9c0a7f68b784a970dd7c5834fccc438ad8317f2d7"} Jan 27 17:05:43 crc kubenswrapper[4767]: I0127 17:05:43.042575 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppr69" event={"ID":"834af50e-2c1b-4390-84c8-39d3f69f2693","Type":"ContainerStarted","Data":"c5c7266093fb63b11a690a5122cb30b631465d2191ef7c6ebb120ce4f5dfa030"} Jan 27 17:05:44 crc kubenswrapper[4767]: I0127 17:05:44.055333 4767 generic.go:334] "Generic (PLEG): container finished" podID="834af50e-2c1b-4390-84c8-39d3f69f2693" containerID="c5c7266093fb63b11a690a5122cb30b631465d2191ef7c6ebb120ce4f5dfa030" exitCode=0 Jan 27 17:05:44 crc kubenswrapper[4767]: I0127 17:05:44.055546 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppr69" event={"ID":"834af50e-2c1b-4390-84c8-39d3f69f2693","Type":"ContainerDied","Data":"c5c7266093fb63b11a690a5122cb30b631465d2191ef7c6ebb120ce4f5dfa030"} Jan 27 17:05:45 crc 
kubenswrapper[4767]: I0127 17:05:45.065500 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppr69" event={"ID":"834af50e-2c1b-4390-84c8-39d3f69f2693","Type":"ContainerStarted","Data":"d75458c32d0bc3403d8358b685b18ead970b5ee21aad22a047aae01fda938382"} Jan 27 17:05:45 crc kubenswrapper[4767]: I0127 17:05:45.087797 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ppr69" podStartSLOduration=2.60228743 podStartE2EDuration="5.087775638s" podCreationTimestamp="2026-01-27 17:05:40 +0000 UTC" firstStartedPulling="2026-01-27 17:05:42.023681311 +0000 UTC m=+4564.412698834" lastFinishedPulling="2026-01-27 17:05:44.509169519 +0000 UTC m=+4566.898187042" observedRunningTime="2026-01-27 17:05:45.082316283 +0000 UTC m=+4567.471333816" watchObservedRunningTime="2026-01-27 17:05:45.087775638 +0000 UTC m=+4567.476793151" Jan 27 17:05:51 crc kubenswrapper[4767]: I0127 17:05:51.089978 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ppr69" Jan 27 17:05:51 crc kubenswrapper[4767]: I0127 17:05:51.090546 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ppr69" Jan 27 17:05:52 crc kubenswrapper[4767]: I0127 17:05:52.138931 4767 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ppr69" podUID="834af50e-2c1b-4390-84c8-39d3f69f2693" containerName="registry-server" probeResult="failure" output=< Jan 27 17:05:52 crc kubenswrapper[4767]: timeout: failed to connect service ":50051" within 1s Jan 27 17:05:52 crc kubenswrapper[4767]: > Jan 27 17:05:54 crc kubenswrapper[4767]: I0127 17:05:54.857627 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:05:54 crc kubenswrapper[4767]: I0127 17:05:54.858098 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:05:54 crc kubenswrapper[4767]: I0127 17:05:54.858169 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" Jan 27 17:05:54 crc kubenswrapper[4767]: I0127 17:05:54.859190 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758"} pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 17:05:54 crc kubenswrapper[4767]: I0127 17:05:54.859328 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" containerID="cri-o://0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758" gracePeriod=600 Jan 27 17:05:54 crc kubenswrapper[4767]: E0127 
17:05:54.981876 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 17:05:55 crc kubenswrapper[4767]: I0127 17:05:55.164466 4767 generic.go:334] "Generic (PLEG): container finished" podID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerID="0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758" exitCode=0 Jan 27 17:05:55 crc kubenswrapper[4767]: I0127 17:05:55.164530 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerDied","Data":"0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758"} Jan 27 17:05:55 crc kubenswrapper[4767]: I0127 17:05:55.164575 4767 scope.go:117] "RemoveContainer" containerID="67cf98cb7edba4ebdfb2b59dde0236a61621380d9f946b679e711de60d99f892" Jan 27 17:05:55 crc kubenswrapper[4767]: I0127 17:05:55.165526 4767 scope.go:117] "RemoveContainer" containerID="0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758" Jan 27 17:05:55 crc kubenswrapper[4767]: E0127 17:05:55.166384 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 17:06:01 crc kubenswrapper[4767]: I0127 17:06:01.130454 4767 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ppr69" Jan 27 17:06:01 crc kubenswrapper[4767]: I0127 17:06:01.178168 4767 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ppr69" Jan 27 17:06:01 crc kubenswrapper[4767]: I0127 17:06:01.380780 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ppr69"] Jan 27 17:06:02 crc kubenswrapper[4767]: I0127 17:06:02.224246 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ppr69" podUID="834af50e-2c1b-4390-84c8-39d3f69f2693" containerName="registry-server" containerID="cri-o://d75458c32d0bc3403d8358b685b18ead970b5ee21aad22a047aae01fda938382" gracePeriod=2 Jan 27 17:06:02 crc kubenswrapper[4767]: I0127 17:06:02.730157 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ppr69" Jan 27 17:06:02 crc kubenswrapper[4767]: I0127 17:06:02.872606 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/834af50e-2c1b-4390-84c8-39d3f69f2693-catalog-content\") pod \"834af50e-2c1b-4390-84c8-39d3f69f2693\" (UID: \"834af50e-2c1b-4390-84c8-39d3f69f2693\") " Jan 27 17:06:02 crc kubenswrapper[4767]: I0127 17:06:02.872707 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/834af50e-2c1b-4390-84c8-39d3f69f2693-utilities\") pod \"834af50e-2c1b-4390-84c8-39d3f69f2693\" (UID: \"834af50e-2c1b-4390-84c8-39d3f69f2693\") " Jan 27 17:06:02 crc kubenswrapper[4767]: I0127 17:06:02.872770 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmmdw\" (UniqueName: \"kubernetes.io/projected/834af50e-2c1b-4390-84c8-39d3f69f2693-kube-api-access-rmmdw\") pod \"834af50e-2c1b-4390-84c8-39d3f69f2693\" (UID: \"834af50e-2c1b-4390-84c8-39d3f69f2693\") " Jan 27 17:06:02 crc kubenswrapper[4767]: I0127 17:06:02.874312 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/834af50e-2c1b-4390-84c8-39d3f69f2693-utilities" (OuterVolumeSpecName: "utilities") pod "834af50e-2c1b-4390-84c8-39d3f69f2693" (UID: "834af50e-2c1b-4390-84c8-39d3f69f2693"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:06:02 crc kubenswrapper[4767]: I0127 17:06:02.881544 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/834af50e-2c1b-4390-84c8-39d3f69f2693-kube-api-access-rmmdw" (OuterVolumeSpecName: "kube-api-access-rmmdw") pod "834af50e-2c1b-4390-84c8-39d3f69f2693" (UID: "834af50e-2c1b-4390-84c8-39d3f69f2693"). InnerVolumeSpecName "kube-api-access-rmmdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:06:02 crc kubenswrapper[4767]: I0127 17:06:02.975267 4767 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/834af50e-2c1b-4390-84c8-39d3f69f2693-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 17:06:02 crc kubenswrapper[4767]: I0127 17:06:02.975316 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmmdw\" (UniqueName: \"kubernetes.io/projected/834af50e-2c1b-4390-84c8-39d3f69f2693-kube-api-access-rmmdw\") on node \"crc\" DevicePath \"\"" Jan 27 17:06:03 crc kubenswrapper[4767]: I0127 17:06:03.038265 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/834af50e-2c1b-4390-84c8-39d3f69f2693-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "834af50e-2c1b-4390-84c8-39d3f69f2693" (UID: "834af50e-2c1b-4390-84c8-39d3f69f2693"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:06:03 crc kubenswrapper[4767]: I0127 17:06:03.076729 4767 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/834af50e-2c1b-4390-84c8-39d3f69f2693-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 17:06:03 crc kubenswrapper[4767]: I0127 17:06:03.232101 4767 generic.go:334] "Generic (PLEG): container finished" podID="834af50e-2c1b-4390-84c8-39d3f69f2693" containerID="d75458c32d0bc3403d8358b685b18ead970b5ee21aad22a047aae01fda938382" exitCode=0 Jan 27 17:06:03 crc kubenswrapper[4767]: I0127 17:06:03.232164 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ppr69" Jan 27 17:06:03 crc kubenswrapper[4767]: I0127 17:06:03.232237 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppr69" event={"ID":"834af50e-2c1b-4390-84c8-39d3f69f2693","Type":"ContainerDied","Data":"d75458c32d0bc3403d8358b685b18ead970b5ee21aad22a047aae01fda938382"} Jan 27 17:06:03 crc kubenswrapper[4767]: I0127 17:06:03.232943 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppr69" event={"ID":"834af50e-2c1b-4390-84c8-39d3f69f2693","Type":"ContainerDied","Data":"aa117ff4de59c6bcb7ef48b9c0a7f68b784a970dd7c5834fccc438ad8317f2d7"} Jan 27 17:06:03 crc kubenswrapper[4767]: I0127 17:06:03.232970 4767 scope.go:117] "RemoveContainer" containerID="d75458c32d0bc3403d8358b685b18ead970b5ee21aad22a047aae01fda938382" Jan 27 17:06:03 crc kubenswrapper[4767]: I0127 17:06:03.267415 4767 scope.go:117] "RemoveContainer" containerID="c5c7266093fb63b11a690a5122cb30b631465d2191ef7c6ebb120ce4f5dfa030" Jan 27 17:06:03 crc kubenswrapper[4767]: I0127 17:06:03.274052 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ppr69"] Jan 27 17:06:03 crc kubenswrapper[4767]: I0127 17:06:03.281495 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ppr69"] Jan 27 17:06:03 crc kubenswrapper[4767]: I0127 17:06:03.296664 4767 scope.go:117] "RemoveContainer" containerID="40f8494ace2d3761cc103a6a7e655edf9fac0cf8b344d623526106776e587e6d" Jan 27 17:06:03 crc kubenswrapper[4767]: I0127 17:06:03.339689 4767 scope.go:117] "RemoveContainer" containerID="d75458c32d0bc3403d8358b685b18ead970b5ee21aad22a047aae01fda938382" Jan 27 17:06:03 crc kubenswrapper[4767]: E0127 17:06:03.340235 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d75458c32d0bc3403d8358b685b18ead970b5ee21aad22a047aae01fda938382\": container with ID starting with d75458c32d0bc3403d8358b685b18ead970b5ee21aad22a047aae01fda938382 not found: ID does not exist" containerID="d75458c32d0bc3403d8358b685b18ead970b5ee21aad22a047aae01fda938382" Jan 27 17:06:03 crc kubenswrapper[4767]: I0127 17:06:03.340279 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d75458c32d0bc3403d8358b685b18ead970b5ee21aad22a047aae01fda938382"} err="failed to get container status \"d75458c32d0bc3403d8358b685b18ead970b5ee21aad22a047aae01fda938382\": rpc error: code = NotFound desc = could not find container \"d75458c32d0bc3403d8358b685b18ead970b5ee21aad22a047aae01fda938382\": container with ID starting with d75458c32d0bc3403d8358b685b18ead970b5ee21aad22a047aae01fda938382 not found: ID does not exist" Jan 27 17:06:03 crc 
kubenswrapper[4767]: I0127 17:06:03.340310 4767 scope.go:117] "RemoveContainer" containerID="c5c7266093fb63b11a690a5122cb30b631465d2191ef7c6ebb120ce4f5dfa030" Jan 27 17:06:03 crc kubenswrapper[4767]: E0127 17:06:03.347773 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5c7266093fb63b11a690a5122cb30b631465d2191ef7c6ebb120ce4f5dfa030\": container with ID starting with c5c7266093fb63b11a690a5122cb30b631465d2191ef7c6ebb120ce4f5dfa030 not found: ID does not exist" containerID="c5c7266093fb63b11a690a5122cb30b631465d2191ef7c6ebb120ce4f5dfa030" Jan 27 17:06:03 crc kubenswrapper[4767]: I0127 17:06:03.348041 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5c7266093fb63b11a690a5122cb30b631465d2191ef7c6ebb120ce4f5dfa030"} err="failed to get container status \"c5c7266093fb63b11a690a5122cb30b631465d2191ef7c6ebb120ce4f5dfa030\": rpc error: code = NotFound desc = could not find container \"c5c7266093fb63b11a690a5122cb30b631465d2191ef7c6ebb120ce4f5dfa030\": container with ID starting with c5c7266093fb63b11a690a5122cb30b631465d2191ef7c6ebb120ce4f5dfa030 not found: ID does not exist" Jan 27 17:06:03 crc kubenswrapper[4767]: I0127 17:06:03.348142 4767 scope.go:117] "RemoveContainer" containerID="40f8494ace2d3761cc103a6a7e655edf9fac0cf8b344d623526106776e587e6d" Jan 27 17:06:03 crc kubenswrapper[4767]: E0127 17:06:03.348749 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40f8494ace2d3761cc103a6a7e655edf9fac0cf8b344d623526106776e587e6d\": container with ID starting with 40f8494ace2d3761cc103a6a7e655edf9fac0cf8b344d623526106776e587e6d not found: ID does not exist" containerID="40f8494ace2d3761cc103a6a7e655edf9fac0cf8b344d623526106776e587e6d" Jan 27 17:06:03 crc kubenswrapper[4767]: I0127 17:06:03.348793 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40f8494ace2d3761cc103a6a7e655edf9fac0cf8b344d623526106776e587e6d"} err="failed to get container status \"40f8494ace2d3761cc103a6a7e655edf9fac0cf8b344d623526106776e587e6d\": rpc error: code = NotFound desc = could not find container \"40f8494ace2d3761cc103a6a7e655edf9fac0cf8b344d623526106776e587e6d\": container with ID starting with 40f8494ace2d3761cc103a6a7e655edf9fac0cf8b344d623526106776e587e6d not found: ID does not exist" Jan 27 17:06:04 crc kubenswrapper[4767]: I0127 17:06:04.349624 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="834af50e-2c1b-4390-84c8-39d3f69f2693" path="/var/lib/kubelet/pods/834af50e-2c1b-4390-84c8-39d3f69f2693/volumes" Jan 27 17:06:08 crc kubenswrapper[4767]: I0127 17:06:08.330992 4767 scope.go:117] "RemoveContainer" containerID="0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758" Jan 27 17:06:08 crc kubenswrapper[4767]: E0127 17:06:08.331602 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 17:06:20 crc kubenswrapper[4767]: I0127 17:06:20.325416 4767 scope.go:117] "RemoveContainer" containerID="0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758" 
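
[editor's note] From 17:05:55 onward the journal settles into the machine-config-daemon retry loop: each cycle is one "RemoveContainer" attempt followed by pod_workers.go refusing to sync with CrashLoopBackOff. Note the two different cadences: the "Error syncing pod, skipping" lines recur every 10 to 15 seconds (the sync loop re-checking), while the container itself is only restarted once the stated back-off 5m0s expires, at 17:10:56 further below. A small sketch under stated assumptions (again not tooling from this capture; the pod name is copied from the log, and the year is pinned by hand because journald timestamps omit it, with the pod timestamps above saying 2026) that prints the gap between successive CrashLoopBackOff entries so both cadences can be read off directly:

#!/usr/bin/env python3
# Illustrative sketch only; assumes one journal entry per line, as in a
# raw journalctl dump rather than this wrapped capture.
import re
import sys
from datetime import datetime

POD = "machine-config-daemon-mrkmx"  # pod name taken from the log above
TS = re.compile(r'^(\w{3} [ \d]\d \d\d:\d\d:\d\d) crc ')

prev = None
for line in sys.stdin:
    if "CrashLoopBackOff" not in line or POD not in line:
        continue
    m = TS.match(line)
    if not m:
        continue
    # Year is an assumption (journald omits it); 2026 per the pod
    # creation timestamps elsewhere in this capture.
    t = datetime.strptime("2026 " + m.group(1), "%Y %b %d %H:%M:%S")
    if prev is not None:
        print(t.strftime("%H:%M:%S"), "gap=%.0fs" % (t - prev).total_seconds())
    prev = t

Fed the entries above, the gaps come out around 11 to 15 seconds each, which is the sync-loop interval; the roughly five-minute span between the 17:05:55 kill and the 17:10:56 ContainerStarted is the CrashLoopBackOff cap itself.
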
Jan 27 17:06:20 crc kubenswrapper[4767]: E0127 17:06:20.326238 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 17:06:35 crc kubenswrapper[4767]: I0127 17:06:35.325624 4767 scope.go:117] "RemoveContainer" containerID="0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758" Jan 27 17:06:35 crc kubenswrapper[4767]: E0127 17:06:35.326577 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 17:06:47 crc kubenswrapper[4767]: I0127 17:06:47.326138 4767 scope.go:117] "RemoveContainer" containerID="0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758" Jan 27 17:06:47 crc kubenswrapper[4767]: E0127 17:06:47.327240 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 17:06:58 crc kubenswrapper[4767]: I0127 17:06:58.329435 4767 scope.go:117] "RemoveContainer" containerID="0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758" Jan 27 17:06:58 crc kubenswrapper[4767]: E0127 17:06:58.330161 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 17:07:11 crc kubenswrapper[4767]: I0127 17:07:11.325918 4767 scope.go:117] "RemoveContainer" containerID="0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758" Jan 27 17:07:11 crc kubenswrapper[4767]: E0127 17:07:11.327068 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 17:07:26 crc kubenswrapper[4767]: I0127 17:07:26.325417 4767 scope.go:117] "RemoveContainer" containerID="0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758" Jan 27 17:07:26 crc kubenswrapper[4767]: E0127 17:07:26.326122 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 17:07:39 crc kubenswrapper[4767]: I0127 17:07:39.326087 4767 scope.go:117] "RemoveContainer" containerID="0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758" Jan 27 17:07:39 crc kubenswrapper[4767]: E0127 17:07:39.326856 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 17:07:51 crc kubenswrapper[4767]: I0127 17:07:51.325427 4767 scope.go:117] "RemoveContainer" containerID="0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758" Jan 27 17:07:51 crc kubenswrapper[4767]: E0127 17:07:51.326443 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 17:08:06 crc kubenswrapper[4767]: I0127 17:08:06.326190 4767 scope.go:117] "RemoveContainer" containerID="0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758" Jan 27 17:08:06 crc kubenswrapper[4767]: E0127 17:08:06.327075 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 17:08:21 crc kubenswrapper[4767]: I0127 17:08:21.326189 4767 scope.go:117] "RemoveContainer" containerID="0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758" Jan 27 17:08:21 crc kubenswrapper[4767]: E0127 17:08:21.327138 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 17:08:35 crc kubenswrapper[4767]: I0127 17:08:35.325560 4767 scope.go:117] "RemoveContainer" containerID="0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758" Jan 27 17:08:35 crc kubenswrapper[4767]: E0127 17:08:35.326607 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 17:08:46 crc kubenswrapper[4767]: I0127 17:08:46.327937 4767 scope.go:117] "RemoveContainer" containerID="0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758" Jan 27 17:08:46 crc kubenswrapper[4767]: E0127 17:08:46.328823 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 17:08:57 crc kubenswrapper[4767]: I0127 17:08:57.325513 4767 scope.go:117] "RemoveContainer" containerID="0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758" Jan 27 17:08:57 crc kubenswrapper[4767]: E0127 17:08:57.326576 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 17:09:12 crc kubenswrapper[4767]: I0127 17:09:12.326243 4767 scope.go:117] "RemoveContainer" containerID="0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758" Jan 27 17:09:12 crc kubenswrapper[4767]: E0127 17:09:12.327311 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 17:09:25 crc kubenswrapper[4767]: I0127 17:09:25.326297 4767 scope.go:117] "RemoveContainer" containerID="0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758" Jan 27 17:09:25 crc kubenswrapper[4767]: E0127 17:09:25.327067 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 17:09:40 crc kubenswrapper[4767]: I0127 17:09:40.326027 4767 scope.go:117] "RemoveContainer" containerID="0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758" Jan 27 17:09:40 crc kubenswrapper[4767]: E0127 17:09:40.327014 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 17:09:52 crc kubenswrapper[4767]: I0127 17:09:52.326441 4767 
scope.go:117] "RemoveContainer" containerID="0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758" Jan 27 17:09:52 crc kubenswrapper[4767]: E0127 17:09:52.327720 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 17:10:00 crc kubenswrapper[4767]: I0127 17:10:00.528673 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gz2vf/must-gather-dq54j"] Jan 27 17:10:00 crc kubenswrapper[4767]: E0127 17:10:00.529532 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="834af50e-2c1b-4390-84c8-39d3f69f2693" containerName="extract-content" Jan 27 17:10:00 crc kubenswrapper[4767]: I0127 17:10:00.529547 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="834af50e-2c1b-4390-84c8-39d3f69f2693" containerName="extract-content" Jan 27 17:10:00 crc kubenswrapper[4767]: E0127 17:10:00.529556 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="834af50e-2c1b-4390-84c8-39d3f69f2693" containerName="registry-server" Jan 27 17:10:00 crc kubenswrapper[4767]: I0127 17:10:00.529562 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="834af50e-2c1b-4390-84c8-39d3f69f2693" containerName="registry-server" Jan 27 17:10:00 crc kubenswrapper[4767]: E0127 17:10:00.529583 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="834af50e-2c1b-4390-84c8-39d3f69f2693" containerName="extract-utilities" Jan 27 17:10:00 crc kubenswrapper[4767]: I0127 17:10:00.529588 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="834af50e-2c1b-4390-84c8-39d3f69f2693" containerName="extract-utilities" Jan 27 17:10:00 crc kubenswrapper[4767]: I0127 17:10:00.529760 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="834af50e-2c1b-4390-84c8-39d3f69f2693" containerName="registry-server" Jan 27 17:10:00 crc kubenswrapper[4767]: I0127 17:10:00.530599 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gz2vf/must-gather-dq54j" Jan 27 17:10:00 crc kubenswrapper[4767]: I0127 17:10:00.532957 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gz2vf"/"kube-root-ca.crt" Jan 27 17:10:00 crc kubenswrapper[4767]: I0127 17:10:00.533189 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gz2vf"/"openshift-service-ca.crt" Jan 27 17:10:00 crc kubenswrapper[4767]: I0127 17:10:00.536461 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-gz2vf"/"default-dockercfg-x2cw7" Jan 27 17:10:00 crc kubenswrapper[4767]: I0127 17:10:00.538014 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gz2vf/must-gather-dq54j"] Jan 27 17:10:00 crc kubenswrapper[4767]: I0127 17:10:00.615592 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/308c1299-e10d-4cbe-a77e-8bd11de554bf-must-gather-output\") pod \"must-gather-dq54j\" (UID: \"308c1299-e10d-4cbe-a77e-8bd11de554bf\") " pod="openshift-must-gather-gz2vf/must-gather-dq54j" Jan 27 17:10:00 crc kubenswrapper[4767]: I0127 17:10:00.615688 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-554l6\" (UniqueName: \"kubernetes.io/projected/308c1299-e10d-4cbe-a77e-8bd11de554bf-kube-api-access-554l6\") pod \"must-gather-dq54j\" (UID: \"308c1299-e10d-4cbe-a77e-8bd11de554bf\") " pod="openshift-must-gather-gz2vf/must-gather-dq54j" Jan 27 17:10:00 crc kubenswrapper[4767]: I0127 17:10:00.717034 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-554l6\" (UniqueName: \"kubernetes.io/projected/308c1299-e10d-4cbe-a77e-8bd11de554bf-kube-api-access-554l6\") pod \"must-gather-dq54j\" (UID: \"308c1299-e10d-4cbe-a77e-8bd11de554bf\") " pod="openshift-must-gather-gz2vf/must-gather-dq54j" Jan 27 17:10:00 crc kubenswrapper[4767]: I0127 17:10:00.717141 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/308c1299-e10d-4cbe-a77e-8bd11de554bf-must-gather-output\") pod \"must-gather-dq54j\" (UID: \"308c1299-e10d-4cbe-a77e-8bd11de554bf\") " pod="openshift-must-gather-gz2vf/must-gather-dq54j" Jan 27 17:10:00 crc kubenswrapper[4767]: I0127 17:10:00.717815 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/308c1299-e10d-4cbe-a77e-8bd11de554bf-must-gather-output\") pod \"must-gather-dq54j\" (UID: \"308c1299-e10d-4cbe-a77e-8bd11de554bf\") " pod="openshift-must-gather-gz2vf/must-gather-dq54j" Jan 27 17:10:00 crc kubenswrapper[4767]: I0127 17:10:00.861763 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-554l6\" (UniqueName: \"kubernetes.io/projected/308c1299-e10d-4cbe-a77e-8bd11de554bf-kube-api-access-554l6\") pod \"must-gather-dq54j\" (UID: \"308c1299-e10d-4cbe-a77e-8bd11de554bf\") " pod="openshift-must-gather-gz2vf/must-gather-dq54j" Jan 27 17:10:01 crc kubenswrapper[4767]: I0127 17:10:01.145691 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gz2vf/must-gather-dq54j" Jan 27 17:10:01 crc kubenswrapper[4767]: I0127 17:10:01.639601 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gz2vf/must-gather-dq54j"] Jan 27 17:10:01 crc kubenswrapper[4767]: W0127 17:10:01.653407 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod308c1299_e10d_4cbe_a77e_8bd11de554bf.slice/crio-ebf91ca78851e5b16512f929906e6eb94a7087d73a4f3529ff5fe84f71108350 WatchSource:0}: Error finding container ebf91ca78851e5b16512f929906e6eb94a7087d73a4f3529ff5fe84f71108350: Status 404 returned error can't find the container with id ebf91ca78851e5b16512f929906e6eb94a7087d73a4f3529ff5fe84f71108350 Jan 27 17:10:01 crc kubenswrapper[4767]: I0127 17:10:01.656419 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 17:10:02 crc kubenswrapper[4767]: I0127 17:10:02.265728 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gz2vf/must-gather-dq54j" event={"ID":"308c1299-e10d-4cbe-a77e-8bd11de554bf","Type":"ContainerStarted","Data":"ebf91ca78851e5b16512f929906e6eb94a7087d73a4f3529ff5fe84f71108350"} Jan 27 17:10:05 crc kubenswrapper[4767]: I0127 17:10:05.325809 4767 scope.go:117] "RemoveContainer" containerID="0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758" Jan 27 17:10:05 crc kubenswrapper[4767]: E0127 17:10:05.326628 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 17:10:08 crc kubenswrapper[4767]: I0127 17:10:08.310360 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gz2vf/must-gather-dq54j" event={"ID":"308c1299-e10d-4cbe-a77e-8bd11de554bf","Type":"ContainerStarted","Data":"d64cf434959f7ca30571be3630c33c24571cdb4feabd75fec161179eef023963"} Jan 27 17:10:08 crc kubenswrapper[4767]: I0127 17:10:08.310792 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gz2vf/must-gather-dq54j" event={"ID":"308c1299-e10d-4cbe-a77e-8bd11de554bf","Type":"ContainerStarted","Data":"58d55b55ed52d5797151c3c5365ce33595eb4813a53769987163f5c77a649efe"} Jan 27 17:10:08 crc kubenswrapper[4767]: I0127 17:10:08.333239 4767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gz2vf/must-gather-dq54j" podStartSLOduration=2.43978208 podStartE2EDuration="8.333181904s" podCreationTimestamp="2026-01-27 17:10:00 +0000 UTC" firstStartedPulling="2026-01-27 17:10:01.655979491 +0000 UTC m=+4824.044997034" lastFinishedPulling="2026-01-27 17:10:07.549379315 +0000 UTC m=+4829.938396858" observedRunningTime="2026-01-27 17:10:08.323569641 +0000 UTC m=+4830.712587174" watchObservedRunningTime="2026-01-27 17:10:08.333181904 +0000 UTC m=+4830.722199437" Jan 27 17:10:16 crc kubenswrapper[4767]: I0127 17:10:16.326079 4767 scope.go:117] "RemoveContainer" containerID="0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758" Jan 27 17:10:16 crc kubenswrapper[4767]: E0127 17:10:16.326842 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 17:10:31 crc kubenswrapper[4767]: I0127 17:10:31.325911 4767 scope.go:117] "RemoveContainer" containerID="0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758" Jan 27 17:10:31 crc kubenswrapper[4767]: E0127 17:10:31.326601 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 17:10:42 crc kubenswrapper[4767]: I0127 17:10:42.326374 4767 scope.go:117] "RemoveContainer" containerID="0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758" Jan 27 17:10:42 crc kubenswrapper[4767]: E0127 17:10:42.327504 4767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mrkmx_openshift-machine-config-operator(6f3fb7f5-2925-4714-9e7b-44749885b298)\"" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" Jan 27 17:10:56 crc kubenswrapper[4767]: I0127 17:10:56.325721 4767 scope.go:117] "RemoveContainer" containerID="0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758" Jan 27 17:10:56 crc kubenswrapper[4767]: I0127 17:10:56.639716 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerStarted","Data":"188e83ef0c8d51c542680fa9443636a09f171d2a14958392a94f0afbafab5ca9"} Jan 27 17:11:14 crc kubenswrapper[4767]: I0127 17:11:14.595127 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm_463eb2d8-4b46-4847-af23-df7d867fb2f6/util/0.log" Jan 27 17:11:14 crc kubenswrapper[4767]: I0127 17:11:14.740370 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm_463eb2d8-4b46-4847-af23-df7d867fb2f6/util/0.log" Jan 27 17:11:14 crc kubenswrapper[4767]: I0127 17:11:14.820126 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm_463eb2d8-4b46-4847-af23-df7d867fb2f6/pull/0.log" Jan 27 17:11:14 crc kubenswrapper[4767]: I0127 17:11:14.827104 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm_463eb2d8-4b46-4847-af23-df7d867fb2f6/pull/0.log" Jan 27 17:11:15 crc kubenswrapper[4767]: I0127 17:11:15.022667 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm_463eb2d8-4b46-4847-af23-df7d867fb2f6/util/0.log" Jan 27 17:11:15 crc kubenswrapper[4767]: I0127 
17:11:15.051366 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm_463eb2d8-4b46-4847-af23-df7d867fb2f6/pull/0.log" Jan 27 17:11:15 crc kubenswrapper[4767]: I0127 17:11:15.056078 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5ec9a6b6e11cc1d4fb03879d71519b9c54d1217e3ddcc789706296d216qbfjm_463eb2d8-4b46-4847-af23-df7d867fb2f6/extract/0.log" Jan 27 17:11:15 crc kubenswrapper[4767]: I0127 17:11:15.205547 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-65ff799cfd-2p8b9_4b4d49ca-1e76-4d5a-8205-cdb44f6afa01/manager/0.log" Jan 27 17:11:15 crc kubenswrapper[4767]: I0127 17:11:15.288250 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-655bf9cfbb-nn275_d1f0c156-6150-435c-afc4-224f4f72a0e2/manager/0.log" Jan 27 17:11:15 crc kubenswrapper[4767]: I0127 17:11:15.383061 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-77554cdc5c-zhm6x_916825ff-c27d-4760-92bc-4adb7dc12ca2/manager/0.log" Jan 27 17:11:15 crc kubenswrapper[4767]: I0127 17:11:15.461375 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-67dd55ff59-vrwcs_6a1435fd-9fab-4f48-a588-d8ae2aa1e120/manager/0.log" Jan 27 17:11:15 crc kubenswrapper[4767]: I0127 17:11:15.581730 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-575ffb885b-lwfvm_a8181a54-8433-4343-84b5-f32f6f80f0d6/manager/0.log" Jan 27 17:11:15 crc kubenswrapper[4767]: I0127 17:11:15.801496 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-7vc2f_aa44bde6-467e-42ef-b797-851ee0f87a12/manager/0.log" Jan 27 17:11:15 crc kubenswrapper[4767]: I0127 17:11:15.887409 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d75bc88d5-8ksrz_e093bca8-5087-47cd-a9af-719248b96d6d/manager/0.log" Jan 27 17:11:16 crc kubenswrapper[4767]: I0127 17:11:16.009843 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-768b776ffb-l8nss_fe7ca101-b6f4-4733-a896-a9d203cc4bc0/manager/0.log" Jan 27 17:11:16 crc kubenswrapper[4767]: I0127 17:11:16.083066 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55f684fd56-j7sdn_f7123cb1-dbea-42fd-abba-970911e37f5f/manager/0.log" Jan 27 17:11:16 crc kubenswrapper[4767]: I0127 17:11:16.183898 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-849fcfbb6b-xf9r5_3ae8a5b5-c9f9-4130-af1e-721617d5c204/manager/0.log" Jan 27 17:11:16 crc kubenswrapper[4767]: I0127 17:11:16.249247 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-g6bgj_4c6965aa-5607-4647-a78f-eb708720424e/manager/0.log" Jan 27 17:11:16 crc kubenswrapper[4767]: I0127 17:11:16.432553 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7ffd8d76d4-xfptk_9df4e8b9-adcf-4442-a0db-70f45bf9977d/manager/0.log" Jan 27 17:11:16 crc kubenswrapper[4767]: I0127 
17:11:16.457251 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-ddcbfd695-94z5v_61bd36e7-2117-44d2-86e5-62a7d776434e/manager/0.log" Jan 27 17:11:16 crc kubenswrapper[4767]: I0127 17:11:16.592975 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7875d7675-dzhk4_a5437c7a-1810-4e6f-9db6-22cc39f0c744/manager/0.log" Jan 27 17:11:16 crc kubenswrapper[4767]: I0127 17:11:16.635714 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b85489vr8_355ddf1c-4f8e-45ee-8f68-af3d0b4feb51/manager/0.log" Jan 27 17:11:16 crc kubenswrapper[4767]: I0127 17:11:16.886871 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-65bf5cdd75-dqrvx_e436713f-4d09-4773-ac32-f3ea6741be35/operator/0.log" Jan 27 17:11:16 crc kubenswrapper[4767]: I0127 17:11:16.911683 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-79b75b7c86-j2gvn_8eade9eb-ffdd-43c3-b9ac-5522bc2218b8/manager/0.log" Jan 27 17:11:17 crc kubenswrapper[4767]: I0127 17:11:17.053791 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-lmbvl_ca30eebc-8930-44d6-8a4b-deea9c5dbe56/registry-server/0.log" Jan 27 17:11:17 crc kubenswrapper[4767]: I0127 17:11:17.103521 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-hpg68_5a8ec2b4-9702-46de-bcf4-07bc2fe036e1/manager/0.log" Jan 27 17:11:17 crc kubenswrapper[4767]: I0127 17:11:17.245409 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-qvnm9_6822744e-5d47-466b-9846-88e9c68a3aeb/manager/0.log" Jan 27 17:11:17 crc kubenswrapper[4767]: I0127 17:11:17.297687 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-zm4bn_057f1cf6-9e40-400e-aaa7-9acd79d01c3d/operator/0.log" Jan 27 17:11:17 crc kubenswrapper[4767]: I0127 17:11:17.416167 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-tf9jw_203038ea-e296-4f7e-8228-015aee5ec061/manager/0.log" Jan 27 17:11:17 crc kubenswrapper[4767]: I0127 17:11:17.488616 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-799bc87c89-4vw6t_72144a58-97b0-4150-8198-e9d8f8b0fa7e/manager/0.log" Jan 27 17:11:17 crc kubenswrapper[4767]: I0127 17:11:17.826918 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-gg6cd_666b02fc-6a23-437c-b606-66ba995cd3d6/manager/0.log" Jan 27 17:11:17 crc kubenswrapper[4767]: I0127 17:11:17.870179 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-66576874d7-z5wns_9d11f824-e923-46e5-958a-f42f9c5504ef/manager/0.log" Jan 27 17:11:35 crc kubenswrapper[4767]: I0127 17:11:35.892457 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-7gdqg_dea7593b-32bb-4d48-b47a-2cf9aa0d4185/control-plane-machine-set-operator/0.log" Jan 27 17:11:36 crc kubenswrapper[4767]: I0127 
Jan 27 17:11:36 crc kubenswrapper[4767]: I0127 17:11:36.075396 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-69lb2_56755333-86a4-4a45-b49a-c518575ad5f0/kube-rbac-proxy/0.log"
Jan 27 17:11:36 crc kubenswrapper[4767]: I0127 17:11:36.076273 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-69lb2_56755333-86a4-4a45-b49a-c518575ad5f0/machine-api-operator/0.log"
Jan 27 17:11:48 crc kubenswrapper[4767]: I0127 17:11:48.908759 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-52dbj_5b8d7fa4-0160-4913-a000-6236ad4dd951/cert-manager-controller/0.log"
Jan 27 17:11:49 crc kubenswrapper[4767]: I0127 17:11:49.062789 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-wsnfv_85b4a02d-b650-4c41-92a8-694ac0e43340/cert-manager-cainjector/0.log"
Jan 27 17:11:49 crc kubenswrapper[4767]: I0127 17:11:49.171079 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-w8lk4_3ed96d47-389c-4c3d-a118-21c6ba90b4db/cert-manager-webhook/0.log"
Jan 27 17:12:02 crc kubenswrapper[4767]: I0127 17:12:02.747281 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-mzlgz_2c3d4579-619c-4e0a-b802-067688bc9a2f/nmstate-console-plugin/0.log"
Jan 27 17:12:02 crc kubenswrapper[4767]: I0127 17:12:02.914118 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-czz6l_51f8969c-3967-4f5f-b101-94e942f01395/nmstate-handler/0.log"
Jan 27 17:12:02 crc kubenswrapper[4767]: I0127 17:12:02.957358 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-6hsq7_74feff31-d5c9-4aa8-8789-95a64e2811e5/kube-rbac-proxy/0.log"
Jan 27 17:12:03 crc kubenswrapper[4767]: I0127 17:12:03.047499 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-6hsq7_74feff31-d5c9-4aa8-8789-95a64e2811e5/nmstate-metrics/0.log"
Jan 27 17:12:03 crc kubenswrapper[4767]: I0127 17:12:03.130884 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-pktvq_cf7efc5c-a9e6-4d13-aacb-e4f0d2da2abd/nmstate-operator/0.log"
Jan 27 17:12:03 crc kubenswrapper[4767]: I0127 17:12:03.208542 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-rlr2t_ec8ec347-f0ff-4091-a020-c69c4d4d9bda/nmstate-webhook/0.log"
Jan 27 17:12:16 crc kubenswrapper[4767]: I0127 17:12:16.377301 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-bqzvj_f62bd883-1c36-4ad3-973c-ab9aadf07f1d/prometheus-operator/0.log"
Jan 27 17:12:16 crc kubenswrapper[4767]: I0127 17:12:16.501336 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6f64d74f7b-2zchb_4354c097-733d-43f2-a75f-84763c81d018/prometheus-operator-admission-webhook/0.log"
Jan 27 17:12:16 crc kubenswrapper[4767]: I0127 17:12:16.539390 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6f64d74f7b-bp7qg_0ed4e2f4-af9f-489a-94ac-d408167207a6/prometheus-operator-admission-webhook/0.log"
Jan 27 17:12:16 crc kubenswrapper[4767]: I0127 17:12:16.694903 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-dwt87_b10d2607-d09e-4025-92a6-9eeb1d37f536/operator/0.log"
Jan 27 17:12:16 crc kubenswrapper[4767]: I0127 17:12:16.734604 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-8vdc5_ae225e20-7835-4f58-abe2-12416dfabe72/perses-operator/0.log"
Jan 27 17:12:30 crc kubenswrapper[4767]: I0127 17:12:30.162905 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-fs7mv_fff14b52-109f-4cc6-9361-a577bdcfb615/kube-rbac-proxy/0.log"
Jan 27 17:12:30 crc kubenswrapper[4767]: I0127 17:12:30.291561 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-fs7mv_fff14b52-109f-4cc6-9361-a577bdcfb615/controller/0.log"
Jan 27 17:12:30 crc kubenswrapper[4767]: I0127 17:12:30.399074 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k4csc_bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d/cp-frr-files/0.log"
Jan 27 17:12:30 crc kubenswrapper[4767]: I0127 17:12:30.568758 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k4csc_bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d/cp-metrics/0.log"
Jan 27 17:12:30 crc kubenswrapper[4767]: I0127 17:12:30.569434 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k4csc_bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d/cp-reloader/0.log"
Jan 27 17:12:30 crc kubenswrapper[4767]: I0127 17:12:30.575923 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k4csc_bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d/cp-frr-files/0.log"
Jan 27 17:12:30 crc kubenswrapper[4767]: I0127 17:12:30.580292 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k4csc_bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d/cp-reloader/0.log"
Jan 27 17:12:30 crc kubenswrapper[4767]: I0127 17:12:30.741392 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k4csc_bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d/cp-metrics/0.log"
Jan 27 17:12:30 crc kubenswrapper[4767]: I0127 17:12:30.765606 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k4csc_bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d/cp-reloader/0.log"
Jan 27 17:12:30 crc kubenswrapper[4767]: I0127 17:12:30.829007 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k4csc_bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d/cp-metrics/0.log"
Jan 27 17:12:30 crc kubenswrapper[4767]: I0127 17:12:30.945337 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k4csc_bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d/cp-frr-files/0.log"
Jan 27 17:12:31 crc kubenswrapper[4767]: I0127 17:12:31.148332 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k4csc_bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d/cp-frr-files/0.log"
Jan 27 17:12:31 crc kubenswrapper[4767]: I0127 17:12:31.182281 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k4csc_bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d/cp-metrics/0.log"
Jan 27 17:12:31 crc kubenswrapper[4767]: I0127 17:12:31.189297 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k4csc_bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d/cp-reloader/0.log"
path="/var/log/pods/metallb-system_frr-k8s-k4csc_bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d/controller/0.log" Jan 27 17:12:31 crc kubenswrapper[4767]: I0127 17:12:31.387809 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k4csc_bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d/frr-metrics/0.log" Jan 27 17:12:31 crc kubenswrapper[4767]: I0127 17:12:31.424752 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k4csc_bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d/kube-rbac-proxy/0.log" Jan 27 17:12:31 crc kubenswrapper[4767]: I0127 17:12:31.482068 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k4csc_bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d/kube-rbac-proxy-frr/0.log" Jan 27 17:12:31 crc kubenswrapper[4767]: I0127 17:12:31.599403 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k4csc_bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d/reloader/0.log" Jan 27 17:12:31 crc kubenswrapper[4767]: I0127 17:12:31.678111 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k4csc_bda29c78-cbb6-4943-a4fa-d0d7ef8ca64d/frr/0.log" Jan 27 17:12:31 crc kubenswrapper[4767]: I0127 17:12:31.698958 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-6gj67_0ec154cf-23c7-4b7f-acc3-33c56d7e4cae/frr-k8s-webhook-server/0.log" Jan 27 17:12:31 crc kubenswrapper[4767]: I0127 17:12:31.885025 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-844f686c44-k5sth_9fe0bf56-5fc0-4fbf-a0e5-a372cb365905/manager/0.log" Jan 27 17:12:31 crc kubenswrapper[4767]: I0127 17:12:31.889890 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-596bfd7f57-hl2fj_0a13e440-ff73-4cd7-9759-6ec6c9f7779c/webhook-server/0.log" Jan 27 17:12:32 crc kubenswrapper[4767]: I0127 17:12:32.060447 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cphkn_0974a95e-83af-4ab7-95de-b7ea1211884f/kube-rbac-proxy/0.log" Jan 27 17:12:32 crc kubenswrapper[4767]: I0127 17:12:32.219720 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cphkn_0974a95e-83af-4ab7-95de-b7ea1211884f/speaker/0.log" Jan 27 17:12:45 crc kubenswrapper[4767]: I0127 17:12:45.961464 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n_4b999b3a-a946-45fe-8601-ed762f22e5c1/util/0.log" Jan 27 17:12:46 crc kubenswrapper[4767]: I0127 17:12:46.154228 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n_4b999b3a-a946-45fe-8601-ed762f22e5c1/pull/0.log" Jan 27 17:12:46 crc kubenswrapper[4767]: I0127 17:12:46.205753 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n_4b999b3a-a946-45fe-8601-ed762f22e5c1/pull/0.log" Jan 27 17:12:46 crc kubenswrapper[4767]: I0127 17:12:46.208398 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n_4b999b3a-a946-45fe-8601-ed762f22e5c1/util/0.log" Jan 27 17:12:46 crc kubenswrapper[4767]: I0127 17:12:46.388112 4767 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n_4b999b3a-a946-45fe-8601-ed762f22e5c1/pull/0.log" Jan 27 17:12:46 crc kubenswrapper[4767]: I0127 17:12:46.390705 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n_4b999b3a-a946-45fe-8601-ed762f22e5c1/util/0.log" Jan 27 17:12:46 crc kubenswrapper[4767]: I0127 17:12:46.406791 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5rp7n_4b999b3a-a946-45fe-8601-ed762f22e5c1/extract/0.log" Jan 27 17:12:46 crc kubenswrapper[4767]: I0127 17:12:46.593115 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2_47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36/util/0.log" Jan 27 17:12:46 crc kubenswrapper[4767]: I0127 17:12:46.735647 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2_47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36/pull/0.log" Jan 27 17:12:46 crc kubenswrapper[4767]: I0127 17:12:46.755091 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2_47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36/pull/0.log" Jan 27 17:12:46 crc kubenswrapper[4767]: I0127 17:12:46.760882 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2_47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36/util/0.log" Jan 27 17:12:46 crc kubenswrapper[4767]: I0127 17:12:46.941620 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2_47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36/util/0.log" Jan 27 17:12:46 crc kubenswrapper[4767]: I0127 17:12:46.955391 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2_47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36/pull/0.log" Jan 27 17:12:46 crc kubenswrapper[4767]: I0127 17:12:46.956334 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f2rq2_47ddc6ba-7e6d-4e0b-b899-e30cd3e0de36/extract/0.log" Jan 27 17:12:47 crc kubenswrapper[4767]: I0127 17:12:47.119496 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc_8c66df55-20ac-4827-b531-7284399769c1/util/0.log" Jan 27 17:12:47 crc kubenswrapper[4767]: I0127 17:12:47.277538 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc_8c66df55-20ac-4827-b531-7284399769c1/pull/0.log" Jan 27 17:12:47 crc kubenswrapper[4767]: I0127 17:12:47.282395 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc_8c66df55-20ac-4827-b531-7284399769c1/util/0.log" Jan 27 17:12:47 crc kubenswrapper[4767]: I0127 17:12:47.284357 4767 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc_8c66df55-20ac-4827-b531-7284399769c1/pull/0.log" Jan 27 17:12:47 crc kubenswrapper[4767]: I0127 17:12:47.436946 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc_8c66df55-20ac-4827-b531-7284399769c1/util/0.log" Jan 27 17:12:47 crc kubenswrapper[4767]: I0127 17:12:47.452190 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc_8c66df55-20ac-4827-b531-7284399769c1/pull/0.log" Jan 27 17:12:47 crc kubenswrapper[4767]: I0127 17:12:47.496323 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089njhc_8c66df55-20ac-4827-b531-7284399769c1/extract/0.log" Jan 27 17:12:47 crc kubenswrapper[4767]: I0127 17:12:47.636710 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pp2kc_6e9e5c7b-5521-4815-9f8d-8de92c9fce65/extract-utilities/0.log" Jan 27 17:12:47 crc kubenswrapper[4767]: I0127 17:12:47.790848 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pp2kc_6e9e5c7b-5521-4815-9f8d-8de92c9fce65/extract-utilities/0.log" Jan 27 17:12:47 crc kubenswrapper[4767]: I0127 17:12:47.796264 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pp2kc_6e9e5c7b-5521-4815-9f8d-8de92c9fce65/extract-content/0.log" Jan 27 17:12:47 crc kubenswrapper[4767]: I0127 17:12:47.832571 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pp2kc_6e9e5c7b-5521-4815-9f8d-8de92c9fce65/extract-content/0.log" Jan 27 17:12:47 crc kubenswrapper[4767]: I0127 17:12:47.979876 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pp2kc_6e9e5c7b-5521-4815-9f8d-8de92c9fce65/extract-utilities/0.log" Jan 27 17:12:48 crc kubenswrapper[4767]: I0127 17:12:48.021637 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pp2kc_6e9e5c7b-5521-4815-9f8d-8de92c9fce65/extract-content/0.log" Jan 27 17:12:48 crc kubenswrapper[4767]: I0127 17:12:48.221007 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-722hl_eb3bd86e-7da3-4f5a-bae8-37573493b0f4/extract-utilities/0.log" Jan 27 17:12:48 crc kubenswrapper[4767]: I0127 17:12:48.404108 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-722hl_eb3bd86e-7da3-4f5a-bae8-37573493b0f4/extract-content/0.log" Jan 27 17:12:48 crc kubenswrapper[4767]: I0127 17:12:48.429778 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-722hl_eb3bd86e-7da3-4f5a-bae8-37573493b0f4/extract-utilities/0.log" Jan 27 17:12:48 crc kubenswrapper[4767]: I0127 17:12:48.444923 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-722hl_eb3bd86e-7da3-4f5a-bae8-37573493b0f4/extract-content/0.log" Jan 27 17:12:48 crc kubenswrapper[4767]: I0127 17:12:48.657683 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-722hl_eb3bd86e-7da3-4f5a-bae8-37573493b0f4/extract-content/0.log" Jan 27 17:12:48 
crc kubenswrapper[4767]: I0127 17:12:48.657731 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-722hl_eb3bd86e-7da3-4f5a-bae8-37573493b0f4/extract-utilities/0.log" Jan 27 17:12:48 crc kubenswrapper[4767]: I0127 17:12:48.698980 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pp2kc_6e9e5c7b-5521-4815-9f8d-8de92c9fce65/registry-server/0.log" Jan 27 17:12:48 crc kubenswrapper[4767]: I0127 17:12:48.840321 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-722hl_eb3bd86e-7da3-4f5a-bae8-37573493b0f4/registry-server/0.log" Jan 27 17:12:49 crc kubenswrapper[4767]: I0127 17:12:49.022245 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-wblwm_a342ddeb-bdff-452a-966d-5460a1c5f924/marketplace-operator/0.log" Jan 27 17:12:49 crc kubenswrapper[4767]: I0127 17:12:49.127229 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96zhx_a3c62726-f5dc-452a-9284-63a4d82ba2c4/extract-utilities/0.log" Jan 27 17:12:50 crc kubenswrapper[4767]: I0127 17:12:50.066603 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96zhx_a3c62726-f5dc-452a-9284-63a4d82ba2c4/extract-content/0.log" Jan 27 17:12:50 crc kubenswrapper[4767]: I0127 17:12:50.158374 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96zhx_a3c62726-f5dc-452a-9284-63a4d82ba2c4/extract-content/0.log" Jan 27 17:12:50 crc kubenswrapper[4767]: I0127 17:12:50.173695 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96zhx_a3c62726-f5dc-452a-9284-63a4d82ba2c4/extract-utilities/0.log" Jan 27 17:12:50 crc kubenswrapper[4767]: I0127 17:12:50.283114 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96zhx_a3c62726-f5dc-452a-9284-63a4d82ba2c4/extract-content/0.log" Jan 27 17:12:50 crc kubenswrapper[4767]: I0127 17:12:50.283898 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96zhx_a3c62726-f5dc-452a-9284-63a4d82ba2c4/extract-utilities/0.log" Jan 27 17:12:50 crc kubenswrapper[4767]: I0127 17:12:50.412237 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96zhx_a3c62726-f5dc-452a-9284-63a4d82ba2c4/registry-server/0.log" Jan 27 17:12:50 crc kubenswrapper[4767]: I0127 17:12:50.419970 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-x8k6k_0d786c99-0af9-45d4-af0f-2568df55af59/extract-utilities/0.log" Jan 27 17:12:50 crc kubenswrapper[4767]: I0127 17:12:50.543983 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-x8k6k_0d786c99-0af9-45d4-af0f-2568df55af59/extract-content/0.log" Jan 27 17:12:50 crc kubenswrapper[4767]: I0127 17:12:50.549783 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-x8k6k_0d786c99-0af9-45d4-af0f-2568df55af59/extract-utilities/0.log" Jan 27 17:12:50 crc kubenswrapper[4767]: I0127 17:12:50.601023 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-x8k6k_0d786c99-0af9-45d4-af0f-2568df55af59/extract-content/0.log" Jan 27 17:12:50 crc 
Jan 27 17:12:50 crc kubenswrapper[4767]: I0127 17:12:50.742222 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-x8k6k_0d786c99-0af9-45d4-af0f-2568df55af59/extract-utilities/0.log"
Jan 27 17:12:50 crc kubenswrapper[4767]: I0127 17:12:50.760482 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-x8k6k_0d786c99-0af9-45d4-af0f-2568df55af59/extract-content/0.log"
Jan 27 17:12:51 crc kubenswrapper[4767]: I0127 17:12:51.380820 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-x8k6k_0d786c99-0af9-45d4-af0f-2568df55af59/registry-server/0.log"
Jan 27 17:13:03 crc kubenswrapper[4767]: I0127 17:13:03.654055 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6f64d74f7b-2zchb_4354c097-733d-43f2-a75f-84763c81d018/prometheus-operator-admission-webhook/0.log"
Jan 27 17:13:03 crc kubenswrapper[4767]: I0127 17:13:03.662014 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-bqzvj_f62bd883-1c36-4ad3-973c-ab9aadf07f1d/prometheus-operator/0.log"
Jan 27 17:13:03 crc kubenswrapper[4767]: I0127 17:13:03.662414 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6f64d74f7b-bp7qg_0ed4e2f4-af9f-489a-94ac-d408167207a6/prometheus-operator-admission-webhook/0.log"
Jan 27 17:13:03 crc kubenswrapper[4767]: I0127 17:13:03.856163 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-dwt87_b10d2607-d09e-4025-92a6-9eeb1d37f536/operator/0.log"
Jan 27 17:13:03 crc kubenswrapper[4767]: I0127 17:13:03.878147 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-8vdc5_ae225e20-7835-4f58-abe2-12416dfabe72/perses-operator/0.log"
Jan 27 17:13:24 crc kubenswrapper[4767]: I0127 17:13:24.858406 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 17:13:24 crc kubenswrapper[4767]: I0127 17:13:24.858914 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 17:13:54 crc kubenswrapper[4767]: I0127 17:13:54.858273 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 17:13:54 crc kubenswrapper[4767]: I0127 17:13:54.858804 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 17:13:59 crc kubenswrapper[4767]: I0127 17:13:59.104026 4767 generic.go:334] "Generic (PLEG): container finished" podID="308c1299-e10d-4cbe-a77e-8bd11de554bf" containerID="58d55b55ed52d5797151c3c5365ce33595eb4813a53769987163f5c77a649efe" exitCode=0
Jan 27 17:13:59 crc kubenswrapper[4767]: I0127 17:13:59.104137 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gz2vf/must-gather-dq54j" event={"ID":"308c1299-e10d-4cbe-a77e-8bd11de554bf","Type":"ContainerDied","Data":"58d55b55ed52d5797151c3c5365ce33595eb4813a53769987163f5c77a649efe"}
Jan 27 17:13:59 crc kubenswrapper[4767]: I0127 17:13:59.105670 4767 scope.go:117] "RemoveContainer" containerID="58d55b55ed52d5797151c3c5365ce33595eb4813a53769987163f5c77a649efe"
Jan 27 17:13:59 crc kubenswrapper[4767]: I0127 17:13:59.920484 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gz2vf_must-gather-dq54j_308c1299-e10d-4cbe-a77e-8bd11de554bf/gather/0.log"
Jan 27 17:14:08 crc kubenswrapper[4767]: I0127 17:14:08.198458 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gz2vf/must-gather-dq54j"]
Jan 27 17:14:08 crc kubenswrapper[4767]: I0127 17:14:08.199104 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-gz2vf/must-gather-dq54j" podUID="308c1299-e10d-4cbe-a77e-8bd11de554bf" containerName="copy" containerID="cri-o://d64cf434959f7ca30571be3630c33c24571cdb4feabd75fec161179eef023963" gracePeriod=2
Jan 27 17:14:08 crc kubenswrapper[4767]: I0127 17:14:08.210006 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gz2vf/must-gather-dq54j"]
Jan 27 17:14:08 crc kubenswrapper[4767]: I0127 17:14:08.654549 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gz2vf_must-gather-dq54j_308c1299-e10d-4cbe-a77e-8bd11de554bf/copy/0.log"
Jan 27 17:14:08 crc kubenswrapper[4767]: I0127 17:14:08.655230 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gz2vf/must-gather-dq54j"
Jan 27 17:14:08 crc kubenswrapper[4767]: I0127 17:14:08.667129 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/308c1299-e10d-4cbe-a77e-8bd11de554bf-must-gather-output\") pod \"308c1299-e10d-4cbe-a77e-8bd11de554bf\" (UID: \"308c1299-e10d-4cbe-a77e-8bd11de554bf\") "
Jan 27 17:14:08 crc kubenswrapper[4767]: I0127 17:14:08.667269 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-554l6\" (UniqueName: \"kubernetes.io/projected/308c1299-e10d-4cbe-a77e-8bd11de554bf-kube-api-access-554l6\") pod \"308c1299-e10d-4cbe-a77e-8bd11de554bf\" (UID: \"308c1299-e10d-4cbe-a77e-8bd11de554bf\") "
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:14:08 crc kubenswrapper[4767]: I0127 17:14:08.761786 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308c1299-e10d-4cbe-a77e-8bd11de554bf-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "308c1299-e10d-4cbe-a77e-8bd11de554bf" (UID: "308c1299-e10d-4cbe-a77e-8bd11de554bf"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:14:08 crc kubenswrapper[4767]: I0127 17:14:08.769629 4767 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/308c1299-e10d-4cbe-a77e-8bd11de554bf-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 27 17:14:08 crc kubenswrapper[4767]: I0127 17:14:08.769667 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-554l6\" (UniqueName: \"kubernetes.io/projected/308c1299-e10d-4cbe-a77e-8bd11de554bf-kube-api-access-554l6\") on node \"crc\" DevicePath \"\"" Jan 27 17:14:09 crc kubenswrapper[4767]: I0127 17:14:09.194794 4767 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gz2vf_must-gather-dq54j_308c1299-e10d-4cbe-a77e-8bd11de554bf/copy/0.log" Jan 27 17:14:09 crc kubenswrapper[4767]: I0127 17:14:09.195213 4767 generic.go:334] "Generic (PLEG): container finished" podID="308c1299-e10d-4cbe-a77e-8bd11de554bf" containerID="d64cf434959f7ca30571be3630c33c24571cdb4feabd75fec161179eef023963" exitCode=143 Jan 27 17:14:09 crc kubenswrapper[4767]: I0127 17:14:09.195311 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gz2vf/must-gather-dq54j" Jan 27 17:14:09 crc kubenswrapper[4767]: I0127 17:14:09.195321 4767 scope.go:117] "RemoveContainer" containerID="d64cf434959f7ca30571be3630c33c24571cdb4feabd75fec161179eef023963" Jan 27 17:14:09 crc kubenswrapper[4767]: I0127 17:14:09.215820 4767 scope.go:117] "RemoveContainer" containerID="58d55b55ed52d5797151c3c5365ce33595eb4813a53769987163f5c77a649efe" Jan 27 17:14:09 crc kubenswrapper[4767]: I0127 17:14:09.276454 4767 scope.go:117] "RemoveContainer" containerID="d64cf434959f7ca30571be3630c33c24571cdb4feabd75fec161179eef023963" Jan 27 17:14:09 crc kubenswrapper[4767]: E0127 17:14:09.277329 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d64cf434959f7ca30571be3630c33c24571cdb4feabd75fec161179eef023963\": container with ID starting with d64cf434959f7ca30571be3630c33c24571cdb4feabd75fec161179eef023963 not found: ID does not exist" containerID="d64cf434959f7ca30571be3630c33c24571cdb4feabd75fec161179eef023963" Jan 27 17:14:09 crc kubenswrapper[4767]: I0127 17:14:09.277396 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d64cf434959f7ca30571be3630c33c24571cdb4feabd75fec161179eef023963"} err="failed to get container status \"d64cf434959f7ca30571be3630c33c24571cdb4feabd75fec161179eef023963\": rpc error: code = NotFound desc = could not find container \"d64cf434959f7ca30571be3630c33c24571cdb4feabd75fec161179eef023963\": container with ID starting with d64cf434959f7ca30571be3630c33c24571cdb4feabd75fec161179eef023963 not found: ID does not exist" Jan 27 17:14:09 crc kubenswrapper[4767]: I0127 17:14:09.277438 4767 scope.go:117] "RemoveContainer" containerID="58d55b55ed52d5797151c3c5365ce33595eb4813a53769987163f5c77a649efe" Jan 27 17:14:09 crc 
kubenswrapper[4767]: E0127 17:14:09.277908 4767 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58d55b55ed52d5797151c3c5365ce33595eb4813a53769987163f5c77a649efe\": container with ID starting with 58d55b55ed52d5797151c3c5365ce33595eb4813a53769987163f5c77a649efe not found: ID does not exist" containerID="58d55b55ed52d5797151c3c5365ce33595eb4813a53769987163f5c77a649efe" Jan 27 17:14:09 crc kubenswrapper[4767]: I0127 17:14:09.277949 4767 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58d55b55ed52d5797151c3c5365ce33595eb4813a53769987163f5c77a649efe"} err="failed to get container status \"58d55b55ed52d5797151c3c5365ce33595eb4813a53769987163f5c77a649efe\": rpc error: code = NotFound desc = could not find container \"58d55b55ed52d5797151c3c5365ce33595eb4813a53769987163f5c77a649efe\": container with ID starting with 58d55b55ed52d5797151c3c5365ce33595eb4813a53769987163f5c77a649efe not found: ID does not exist" Jan 27 17:14:10 crc kubenswrapper[4767]: I0127 17:14:10.338554 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308c1299-e10d-4cbe-a77e-8bd11de554bf" path="/var/lib/kubelet/pods/308c1299-e10d-4cbe-a77e-8bd11de554bf/volumes" Jan 27 17:14:24 crc kubenswrapper[4767]: I0127 17:14:24.857665 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:14:24 crc kubenswrapper[4767]: I0127 17:14:24.858320 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:14:24 crc kubenswrapper[4767]: I0127 17:14:24.858393 4767 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" Jan 27 17:14:24 crc kubenswrapper[4767]: I0127 17:14:24.859325 4767 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"188e83ef0c8d51c542680fa9443636a09f171d2a14958392a94f0afbafab5ca9"} pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 17:14:24 crc kubenswrapper[4767]: I0127 17:14:24.859434 4767 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" containerID="cri-o://188e83ef0c8d51c542680fa9443636a09f171d2a14958392a94f0afbafab5ca9" gracePeriod=600 Jan 27 17:14:25 crc kubenswrapper[4767]: I0127 17:14:25.320743 4767 generic.go:334] "Generic (PLEG): container finished" podID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerID="188e83ef0c8d51c542680fa9443636a09f171d2a14958392a94f0afbafab5ca9" exitCode=0 Jan 27 17:14:25 crc kubenswrapper[4767]: I0127 17:14:25.320831 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" 
event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerDied","Data":"188e83ef0c8d51c542680fa9443636a09f171d2a14958392a94f0afbafab5ca9"} Jan 27 17:14:25 crc kubenswrapper[4767]: I0127 17:14:25.321054 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" event={"ID":"6f3fb7f5-2925-4714-9e7b-44749885b298","Type":"ContainerStarted","Data":"2db156a84d01a2feb1bc4e248987569826468318b10842a7ff93c7bffe1c4ded"} Jan 27 17:14:25 crc kubenswrapper[4767]: I0127 17:14:25.321079 4767 scope.go:117] "RemoveContainer" containerID="0549b4185ddeecb614b985f539a76a4ac982f71672cc100dee116475a087e758" Jan 27 17:15:00 crc kubenswrapper[4767]: I0127 17:15:00.156028 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492235-tbjsp"] Jan 27 17:15:00 crc kubenswrapper[4767]: E0127 17:15:00.156930 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="308c1299-e10d-4cbe-a77e-8bd11de554bf" containerName="gather" Jan 27 17:15:00 crc kubenswrapper[4767]: I0127 17:15:00.156946 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="308c1299-e10d-4cbe-a77e-8bd11de554bf" containerName="gather" Jan 27 17:15:00 crc kubenswrapper[4767]: E0127 17:15:00.156961 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="308c1299-e10d-4cbe-a77e-8bd11de554bf" containerName="copy" Jan 27 17:15:00 crc kubenswrapper[4767]: I0127 17:15:00.156967 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="308c1299-e10d-4cbe-a77e-8bd11de554bf" containerName="copy" Jan 27 17:15:00 crc kubenswrapper[4767]: I0127 17:15:00.157129 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="308c1299-e10d-4cbe-a77e-8bd11de554bf" containerName="gather" Jan 27 17:15:00 crc kubenswrapper[4767]: I0127 17:15:00.157147 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="308c1299-e10d-4cbe-a77e-8bd11de554bf" containerName="copy" Jan 27 17:15:00 crc kubenswrapper[4767]: I0127 17:15:00.157707 4767 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-tbjsp" Jan 27 17:15:00 crc kubenswrapper[4767]: I0127 17:15:00.163098 4767 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 17:15:00 crc kubenswrapper[4767]: I0127 17:15:00.163293 4767 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 17:15:00 crc kubenswrapper[4767]: I0127 17:15:00.169245 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492235-tbjsp"] Jan 27 17:15:00 crc kubenswrapper[4767]: I0127 17:15:00.256067 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1187e804-ba3a-4f0c-8719-c7627056887d-secret-volume\") pod \"collect-profiles-29492235-tbjsp\" (UID: \"1187e804-ba3a-4f0c-8719-c7627056887d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-tbjsp" Jan 27 17:15:00 crc kubenswrapper[4767]: I0127 17:15:00.256134 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k99tl\" (UniqueName: \"kubernetes.io/projected/1187e804-ba3a-4f0c-8719-c7627056887d-kube-api-access-k99tl\") pod \"collect-profiles-29492235-tbjsp\" (UID: \"1187e804-ba3a-4f0c-8719-c7627056887d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-tbjsp" Jan 27 17:15:00 crc kubenswrapper[4767]: I0127 17:15:00.256186 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1187e804-ba3a-4f0c-8719-c7627056887d-config-volume\") pod \"collect-profiles-29492235-tbjsp\" (UID: \"1187e804-ba3a-4f0c-8719-c7627056887d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-tbjsp" Jan 27 17:15:00 crc kubenswrapper[4767]: I0127 17:15:00.357099 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k99tl\" (UniqueName: \"kubernetes.io/projected/1187e804-ba3a-4f0c-8719-c7627056887d-kube-api-access-k99tl\") pod \"collect-profiles-29492235-tbjsp\" (UID: \"1187e804-ba3a-4f0c-8719-c7627056887d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-tbjsp" Jan 27 17:15:00 crc kubenswrapper[4767]: I0127 17:15:00.357445 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1187e804-ba3a-4f0c-8719-c7627056887d-config-volume\") pod \"collect-profiles-29492235-tbjsp\" (UID: \"1187e804-ba3a-4f0c-8719-c7627056887d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-tbjsp" Jan 27 17:15:00 crc kubenswrapper[4767]: I0127 17:15:00.357609 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1187e804-ba3a-4f0c-8719-c7627056887d-secret-volume\") pod \"collect-profiles-29492235-tbjsp\" (UID: \"1187e804-ba3a-4f0c-8719-c7627056887d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-tbjsp" Jan 27 17:15:00 crc kubenswrapper[4767]: I0127 17:15:00.358186 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1187e804-ba3a-4f0c-8719-c7627056887d-config-volume\") pod 
\"collect-profiles-29492235-tbjsp\" (UID: \"1187e804-ba3a-4f0c-8719-c7627056887d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-tbjsp" Jan 27 17:15:00 crc kubenswrapper[4767]: I0127 17:15:00.366104 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1187e804-ba3a-4f0c-8719-c7627056887d-secret-volume\") pod \"collect-profiles-29492235-tbjsp\" (UID: \"1187e804-ba3a-4f0c-8719-c7627056887d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-tbjsp" Jan 27 17:15:00 crc kubenswrapper[4767]: I0127 17:15:00.380912 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k99tl\" (UniqueName: \"kubernetes.io/projected/1187e804-ba3a-4f0c-8719-c7627056887d-kube-api-access-k99tl\") pod \"collect-profiles-29492235-tbjsp\" (UID: \"1187e804-ba3a-4f0c-8719-c7627056887d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-tbjsp" Jan 27 17:15:00 crc kubenswrapper[4767]: I0127 17:15:00.477545 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-tbjsp" Jan 27 17:15:00 crc kubenswrapper[4767]: I0127 17:15:00.869333 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492235-tbjsp"] Jan 27 17:15:00 crc kubenswrapper[4767]: W0127 17:15:00.881718 4767 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1187e804_ba3a_4f0c_8719_c7627056887d.slice/crio-4bf4f37d9b878572b4c69851b2119c6e31a977768f2fa6e93d04f47babe1fd00 WatchSource:0}: Error finding container 4bf4f37d9b878572b4c69851b2119c6e31a977768f2fa6e93d04f47babe1fd00: Status 404 returned error can't find the container with id 4bf4f37d9b878572b4c69851b2119c6e31a977768f2fa6e93d04f47babe1fd00 Jan 27 17:15:01 crc kubenswrapper[4767]: I0127 17:15:01.623368 4767 generic.go:334] "Generic (PLEG): container finished" podID="1187e804-ba3a-4f0c-8719-c7627056887d" containerID="0b0b37759957513154b3fbff55033ebc585fafd94dd60b936ab84524d35a7139" exitCode=0 Jan 27 17:15:01 crc kubenswrapper[4767]: I0127 17:15:01.623417 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-tbjsp" event={"ID":"1187e804-ba3a-4f0c-8719-c7627056887d","Type":"ContainerDied","Data":"0b0b37759957513154b3fbff55033ebc585fafd94dd60b936ab84524d35a7139"} Jan 27 17:15:01 crc kubenswrapper[4767]: I0127 17:15:01.623446 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-tbjsp" event={"ID":"1187e804-ba3a-4f0c-8719-c7627056887d","Type":"ContainerStarted","Data":"4bf4f37d9b878572b4c69851b2119c6e31a977768f2fa6e93d04f47babe1fd00"} Jan 27 17:15:02 crc kubenswrapper[4767]: I0127 17:15:02.961978 4767 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-tbjsp" Jan 27 17:15:03 crc kubenswrapper[4767]: I0127 17:15:03.099505 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1187e804-ba3a-4f0c-8719-c7627056887d-secret-volume\") pod \"1187e804-ba3a-4f0c-8719-c7627056887d\" (UID: \"1187e804-ba3a-4f0c-8719-c7627056887d\") " Jan 27 17:15:03 crc kubenswrapper[4767]: I0127 17:15:03.099697 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1187e804-ba3a-4f0c-8719-c7627056887d-config-volume\") pod \"1187e804-ba3a-4f0c-8719-c7627056887d\" (UID: \"1187e804-ba3a-4f0c-8719-c7627056887d\") " Jan 27 17:15:03 crc kubenswrapper[4767]: I0127 17:15:03.100330 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1187e804-ba3a-4f0c-8719-c7627056887d-config-volume" (OuterVolumeSpecName: "config-volume") pod "1187e804-ba3a-4f0c-8719-c7627056887d" (UID: "1187e804-ba3a-4f0c-8719-c7627056887d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:15:03 crc kubenswrapper[4767]: I0127 17:15:03.100372 4767 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k99tl\" (UniqueName: \"kubernetes.io/projected/1187e804-ba3a-4f0c-8719-c7627056887d-kube-api-access-k99tl\") pod \"1187e804-ba3a-4f0c-8719-c7627056887d\" (UID: \"1187e804-ba3a-4f0c-8719-c7627056887d\") " Jan 27 17:15:03 crc kubenswrapper[4767]: I0127 17:15:03.100687 4767 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1187e804-ba3a-4f0c-8719-c7627056887d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 17:15:03 crc kubenswrapper[4767]: I0127 17:15:03.112301 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1187e804-ba3a-4f0c-8719-c7627056887d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1187e804-ba3a-4f0c-8719-c7627056887d" (UID: "1187e804-ba3a-4f0c-8719-c7627056887d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:15:03 crc kubenswrapper[4767]: I0127 17:15:03.113489 4767 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1187e804-ba3a-4f0c-8719-c7627056887d-kube-api-access-k99tl" (OuterVolumeSpecName: "kube-api-access-k99tl") pod "1187e804-ba3a-4f0c-8719-c7627056887d" (UID: "1187e804-ba3a-4f0c-8719-c7627056887d"). InnerVolumeSpecName "kube-api-access-k99tl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:15:03 crc kubenswrapper[4767]: I0127 17:15:03.202677 4767 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1187e804-ba3a-4f0c-8719-c7627056887d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 17:15:03 crc kubenswrapper[4767]: I0127 17:15:03.202739 4767 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k99tl\" (UniqueName: \"kubernetes.io/projected/1187e804-ba3a-4f0c-8719-c7627056887d-kube-api-access-k99tl\") on node \"crc\" DevicePath \"\"" Jan 27 17:15:03 crc kubenswrapper[4767]: I0127 17:15:03.642579 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-tbjsp" event={"ID":"1187e804-ba3a-4f0c-8719-c7627056887d","Type":"ContainerDied","Data":"4bf4f37d9b878572b4c69851b2119c6e31a977768f2fa6e93d04f47babe1fd00"} Jan 27 17:15:03 crc kubenswrapper[4767]: I0127 17:15:03.642625 4767 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-tbjsp" Jan 27 17:15:03 crc kubenswrapper[4767]: I0127 17:15:03.642632 4767 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bf4f37d9b878572b4c69851b2119c6e31a977768f2fa6e93d04f47babe1fd00" Jan 27 17:15:04 crc kubenswrapper[4767]: I0127 17:15:04.033776 4767 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492190-zdlnr"] Jan 27 17:15:04 crc kubenswrapper[4767]: I0127 17:15:04.039729 4767 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492190-zdlnr"] Jan 27 17:15:04 crc kubenswrapper[4767]: I0127 17:15:04.335148 4767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0b64181-3546-4e58-9aeb-2b832dd80a1c" path="/var/lib/kubelet/pods/d0b64181-3546-4e58-9aeb-2b832dd80a1c/volumes" Jan 27 17:15:43 crc kubenswrapper[4767]: I0127 17:15:43.507667 4767 scope.go:117] "RemoveContainer" containerID="70e915370cab6f95e88757e1ff001304e2070d2b69aa437c3373fdf371fa85a5" Jan 27 17:16:54 crc kubenswrapper[4767]: I0127 17:16:54.857812 4767 patch_prober.go:28] interesting pod/machine-config-daemon-mrkmx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:16:54 crc kubenswrapper[4767]: I0127 17:16:54.858670 4767 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mrkmx" podUID="6f3fb7f5-2925-4714-9e7b-44749885b298" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:17:00 crc kubenswrapper[4767]: I0127 17:17:00.618078 4767 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vtwv6"] Jan 27 17:17:00 crc kubenswrapper[4767]: E0127 17:17:00.619648 4767 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1187e804-ba3a-4f0c-8719-c7627056887d" containerName="collect-profiles" Jan 27 17:17:00 crc kubenswrapper[4767]: I0127 17:17:00.619753 4767 state_mem.go:107] "Deleted CPUSet assignment" podUID="1187e804-ba3a-4f0c-8719-c7627056887d" containerName="collect-profiles" Jan 27 17:17:00 crc kubenswrapper[4767]: 
I0127 17:17:00.620022 4767 memory_manager.go:354] "RemoveStaleState removing state" podUID="1187e804-ba3a-4f0c-8719-c7627056887d" containerName="collect-profiles" Jan 27 17:17:00 crc kubenswrapper[4767]: I0127 17:17:00.623686 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vtwv6" Jan 27 17:17:00 crc kubenswrapper[4767]: I0127 17:17:00.644057 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vtwv6"] Jan 27 17:17:00 crc kubenswrapper[4767]: I0127 17:17:00.798814 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4sjh\" (UniqueName: \"kubernetes.io/projected/25c2db28-24c6-4db9-afa2-68fe4fb29900-kube-api-access-z4sjh\") pod \"redhat-operators-vtwv6\" (UID: \"25c2db28-24c6-4db9-afa2-68fe4fb29900\") " pod="openshift-marketplace/redhat-operators-vtwv6" Jan 27 17:17:00 crc kubenswrapper[4767]: I0127 17:17:00.798891 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25c2db28-24c6-4db9-afa2-68fe4fb29900-utilities\") pod \"redhat-operators-vtwv6\" (UID: \"25c2db28-24c6-4db9-afa2-68fe4fb29900\") " pod="openshift-marketplace/redhat-operators-vtwv6" Jan 27 17:17:00 crc kubenswrapper[4767]: I0127 17:17:00.798920 4767 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25c2db28-24c6-4db9-afa2-68fe4fb29900-catalog-content\") pod \"redhat-operators-vtwv6\" (UID: \"25c2db28-24c6-4db9-afa2-68fe4fb29900\") " pod="openshift-marketplace/redhat-operators-vtwv6" Jan 27 17:17:00 crc kubenswrapper[4767]: I0127 17:17:00.900029 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4sjh\" (UniqueName: \"kubernetes.io/projected/25c2db28-24c6-4db9-afa2-68fe4fb29900-kube-api-access-z4sjh\") pod \"redhat-operators-vtwv6\" (UID: \"25c2db28-24c6-4db9-afa2-68fe4fb29900\") " pod="openshift-marketplace/redhat-operators-vtwv6" Jan 27 17:17:00 crc kubenswrapper[4767]: I0127 17:17:00.900319 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25c2db28-24c6-4db9-afa2-68fe4fb29900-utilities\") pod \"redhat-operators-vtwv6\" (UID: \"25c2db28-24c6-4db9-afa2-68fe4fb29900\") " pod="openshift-marketplace/redhat-operators-vtwv6" Jan 27 17:17:00 crc kubenswrapper[4767]: I0127 17:17:00.900349 4767 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25c2db28-24c6-4db9-afa2-68fe4fb29900-catalog-content\") pod \"redhat-operators-vtwv6\" (UID: \"25c2db28-24c6-4db9-afa2-68fe4fb29900\") " pod="openshift-marketplace/redhat-operators-vtwv6" Jan 27 17:17:00 crc kubenswrapper[4767]: I0127 17:17:00.900818 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25c2db28-24c6-4db9-afa2-68fe4fb29900-catalog-content\") pod \"redhat-operators-vtwv6\" (UID: \"25c2db28-24c6-4db9-afa2-68fe4fb29900\") " pod="openshift-marketplace/redhat-operators-vtwv6" Jan 27 17:17:00 crc kubenswrapper[4767]: I0127 17:17:00.900908 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25c2db28-24c6-4db9-afa2-68fe4fb29900-utilities\") pod 
\"redhat-operators-vtwv6\" (UID: \"25c2db28-24c6-4db9-afa2-68fe4fb29900\") " pod="openshift-marketplace/redhat-operators-vtwv6" Jan 27 17:17:00 crc kubenswrapper[4767]: I0127 17:17:00.921960 4767 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4sjh\" (UniqueName: \"kubernetes.io/projected/25c2db28-24c6-4db9-afa2-68fe4fb29900-kube-api-access-z4sjh\") pod \"redhat-operators-vtwv6\" (UID: \"25c2db28-24c6-4db9-afa2-68fe4fb29900\") " pod="openshift-marketplace/redhat-operators-vtwv6" Jan 27 17:17:00 crc kubenswrapper[4767]: I0127 17:17:00.947934 4767 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vtwv6" Jan 27 17:17:01 crc kubenswrapper[4767]: I0127 17:17:01.416903 4767 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vtwv6"] Jan 27 17:17:01 crc kubenswrapper[4767]: I0127 17:17:01.627174 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vtwv6" event={"ID":"25c2db28-24c6-4db9-afa2-68fe4fb29900","Type":"ContainerStarted","Data":"5ef3ff8347449b34a46edb27bf65f73d8674553e30c3d3726b2349eaa9499c96"} Jan 27 17:17:01 crc kubenswrapper[4767]: I0127 17:17:01.627297 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vtwv6" event={"ID":"25c2db28-24c6-4db9-afa2-68fe4fb29900","Type":"ContainerStarted","Data":"07b8aaeda0818b4bc1fd6e4be5c85b3e18e4bade99a255519aaf2d54cfd93b31"} Jan 27 17:17:01 crc kubenswrapper[4767]: I0127 17:17:01.629499 4767 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 17:17:02 crc kubenswrapper[4767]: I0127 17:17:02.636614 4767 generic.go:334] "Generic (PLEG): container finished" podID="25c2db28-24c6-4db9-afa2-68fe4fb29900" containerID="5ef3ff8347449b34a46edb27bf65f73d8674553e30c3d3726b2349eaa9499c96" exitCode=0 Jan 27 17:17:02 crc kubenswrapper[4767]: I0127 17:17:02.636660 4767 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vtwv6" event={"ID":"25c2db28-24c6-4db9-afa2-68fe4fb29900","Type":"ContainerDied","Data":"5ef3ff8347449b34a46edb27bf65f73d8674553e30c3d3726b2349eaa9499c96"}